The legal high ‘Ivory Wave’, also known as ‘Ivory Coast’, ‘Purple Wave’ or ‘Vanilla Sky’, is a designer drug that became popular among clubbers in the United Kingdom (UK) after mephedrone was banned in April 2010.1 Ivory Wave is advertised as a relaxing bath salt and has been freely available on the Internet for about £15 a packet (200mg).2 Three different versions have been on the market, namely, Ivory Wave, Ivory Wave Ultra (also known as Ivory Wave 2), and Ivory Wave 3, although their differences are unknown.3 Studies have shown that Ivory Wave contains cathinone-derived stimulants and, when snorted in high doses, produces effects similar to those of amphetamine and ecstasy.4
Recently, clusters of hospital admissions have been reported around the UK following the use of Ivory Wave. The majority of patients were described to have ‘acute paranoid psychosis’ with severe agitation, which wore off after a couple of days.5 However, some patients had more serious physical complications and had to be monitored in the coronary care units for up to 12 hours.2, 5
Following the increase in the number of Accident and Emergency (A&E) admissions relating to Ivory Wave, healthcare professionals have expressed concerns about the harmful effects of the substance. The Department of Health has issued advice on managing users who may present to health services for help.6 However, the literature on the physical and psychological effects of the substance is currently limited. We therefore report our case here to describe some of the clinical features of Ivory Wave misuse.
Case presentation
A 26-year-old Caucasian male, with a background history of obsessive-compulsive disorder (OCD) and depression, attended A&E after snorting approximately 700mg of ‘Ivory Wave Version 3’ in a day. He presented with severe agitation, persecutory delusions, and auditory and visual hallucinations. He stated that ‘people’ were trying to kill him and his mother with a knife, and he could hear their voices threatening to kill him. He also complained of mild to moderate breathing difficulty and involuntary movements of his arms and feet.
In recent years he had been ‘experimenting’ with several legal highs, including Ivory Wave, ‘Charge’ and ‘Mojo’. Five weeks prior to this admission he had visited A&E with a similar presentation, but without persecutory delusions, after sniffing an unknown amount of ‘Ivory Wave 2’. The hallucinations disappeared shortly afterwards and he was discharged home.
Otherwise, he was physically fit and well. He had a long history of severe OCD with borderline psychotic features and social anxiety, in which he was persistently worried about what other people might do to him. There was no personal or family history of psychosis. He was taking clomipramine (125mg) and olanzapine (12.5mg) for OCD and depression, but his compliance had been erratic before the admission.
In the present admission he was very agitated and restless. He had non-goal-directed involuntary movements of both arms and feet: repetitive flexion of his elbows and dorsiflexion of his ankles. Physical examination showed that he was pyrexial with a temperature of 37.9°C and had bilaterally dilated pupils. The respiratory examination was normal with an oxygen saturation of 98% on air. The heart rate was slightly fast at 109 beats per minute (bpm) and blood pressure was 122/82 mmHg. The rest of the examination was unremarkable with a normal electrocardiogram (ECG). Laboratory investigations revealed a raised white blood cell (WBC) count of 23.5 × 10⁹/L and C-reactive protein (CRP) of 332 mg/L. He also had hyponatraemia (Na+ 126 mmol/L) and an elevated creatine kinase (CK = 662 iu/L). The urine drug screen was negative for amphetamines, opiates, cannabinoids and cocaine.
Initially the patient was admitted to a medical ward and commenced on normal saline with intravenous antibiotics (co-amoxiclav) because of the raised inflammatory markers. The body temperature, CRP, and WBC count fell gradually; the CK level dropped as well. The blood culture came back negative. However, the patient remained agitated, was running around the ward, and experiencing visual and auditory hallucinations. He required PRN lorazepam and regular diazepam.
On day five of admission the patient was deemed medically fit and discharged from the medical ward. However, he was still agitated and confused about what had happened. Concerns were raised regarding his mental state, given his past psychiatric history and current problems. A Mental Health Act assessment was performed and the patient was admitted to a psychiatric unit under Section 2 of the Mental Health Act 1983 on the same day.
On admission to the unit the persecutory delusions and hallucinations were still present but to a mild degree. The involuntary movements appeared more like muscle twitches, which occurred less frequently. He was observed on the ward and only PRN lorazepam was prescribed together with his regular medication. He then settled on the ward and did not require any further PRN medication.
After a few days the persecutory delusions and hallucinations wore off. The involuntary movements had stopped but left patches of numbness on his right arm, mainly on his fingers. The area was poorly defined and was not localized to a specific dermatome. The numbness disappeared after a few days without any complications.
The patient remained in the psychiatric unit for two weeks before discharge to the care of the Community Mental Health Team (CMHT).
Discussion
The exact components of Ivory Wave are unclear and thought to be variable.4,7 Studies have shown that the main ingredients include MDPV (3,4-methylenedioxypyrovalerone); desoxypipradrol, also known as 2-diphenylmethylpiperidine (2-DPMP); and lidocaine.7,8,9 MDPV and desoxypipradrol are both synthetic stimulants. MDPV was first synthesized in 1969,10 and is found as a white or light tan powder.4 Desoxypipradrol was initially developed by a pharmaceutical company in the 1950s as a treatment for Attention Deficit Hyperactivity Disorder (ADHD) and narcolepsy, but it was replaced by other related substances.8 Both act as noradrenaline and dopamine reuptake inhibitors and, in high doses, their effects are thought to be similar to those of amphetamine and cocaine.9,11,12
Ivory Wave is known to produce several desired effects, including increased energy and sociability, increased concentration, and sexual stimulation. Many unwanted physical and psychological effects have also been reported, including insomnia, severe agitation/anxiety, panic attacks, kidney pain, stomach cramps, tachycardia, hypertension, dilated pupils, headache, tinnitus, skin prickling and numbness, dizziness, and dyspnoea.4, 13 These effects appear highly dose-dependent4 and reports of them are largely based on users’ self-reports on online forums.
In the UK there have been several media reports of hospital admissions related to Ivory Wave. The majority of patients were described to have acute psychotic symptoms, namely paranoid delusions and auditory and/or visual hallucinations. A few of them had physical complications requiring cardiac monitoring in ICU.1, 2, 5 However, no detailed description of their clinical features was available.
In this case report, we have described chronologically the clinical features of a patient who presented to A&E after taking Ivory Wave. The patient had a presentation similar to that described by Deluca and colleagues.4 The patient also experienced involuntary movements of his limbs, which have not been reported before in the literature. We have also reported the blood test results: raised inflammatory markers (WBC and CRP) and raised CK.
The findings from this case, in combination with the limited literature, suggest that the use of ‘Ivory Wave’ can lead to serious complications including over-stimulation of the cardiovascular and nervous systems, hyperthermia, and acute psychosis, which can potentially result in severe illness or even death. The risk of these effects would be greater if the drug were combined with other recreational drugs or alcohol. In addition, the exact composition and strength of the substance may vary, and users may not be completely aware of what chemicals they are consuming. This implies that users of Ivory Wave may be taking potentially dangerous substances with unknown effects.
In April 2010, MDPV was made a Class B drug in the UK together with other cathinone derivatives. In addition, the UK Home Office has recently announced a ban on the import of desoxypipradrol and any products containing the chemical.15 The use and availability of Ivory Wave in the UK is being closely monitored and may result in further legislative review. Changes in legislation, more research studies, and health education on Ivory Wave could help the public to realize that, irrespective of the legal status of a drug, recreational use of substances may pose a significant risk to health.
Irritable bowel syndrome (IBS) is a common disorder characterized by abdominal pain and altered bowel habit for at least three months.(1)
IBS is further defined depending on the predominant bowel symptom: IBS with constipation (IBS-C) or IBS with diarrhoea (IBS-D). Those not classified as either IBS-C or IBS-D are considered as mixed IBS (IBS-M). Alternating IBS (IBS-A) defines patients whose bowel habits oscillate from diarrhoea to constipation and vice versa.
IBS is a prevalent and expensive condition that is associated with significantly impaired health-related quality of life (HRQOL) and reduced work productivity. IBS care consumes over $20 billion in direct and indirect expenditures. Moreover, patients with IBS consume over 50% more health care resources than matched controls without IBS.(1) Based on strict criteria, 7-10% of people worldwide have IBS. Community-based data indicate that diarrhoea-predominant IBS (IBS-D) and mixed IBS (IBS-M) subtypes are more prevalent than constipation-predominant IBS (IBS-C), and that switching among subtype groups may occur. IBS is 1.5 times more common in women than in men, is more common in lower socioeconomic groups, and is more commonly diagnosed in patients younger than 50 years of age. Prevalence estimates of IBS range from 1% to more than 20% in North America (about 7%).(1) In Asia the prevalence is about 5%.(3,4,5) Recently, a school-based study in China reported that the prevalence of IBS in adolescents and children was 13.25%, with a boy-to-girl ratio of 1:1.8.(6) Most patients with IBS in India are middle-aged men (mean age 39.4 years).(7)
Underlying pathophysiology:
Given the lack of definitive organic markers for IBS, the absence of a consolidated hypothesis regarding its underlying pathophysiology is not surprising. Nevertheless, important advances in research made during the past 50 years have brought us closer than ever to understanding the numerous aetiological factors involved in this multifaceted disorder, including environmental factors, genetic factors, previous infection, food intolerance, and abnormal serotonergic signaling in the GI tract.
Environmental factors:
The biopsychosocial model proposed by Engel(8) takes into account the interplay between biologic, psychological, and social factors. This model proposes that there is an underlying biologic predisposition for IBS that may be acted on by environmental factors and psychological stressors, which contribute to disease development, the patient's perception of illness, and treatment outcomes. Different studies have shown that stress can trigger the release of stress-related hormones that affect colonic sensorimotor function (e.g., corticotropin-releasing factor [CRF]) and of inflammatory mediators (e.g., interleukin [IL]-1), leading to inflammation and altered GI motility and sensation.
Genetic factors:
Twin studies have shown that IBS is twice as prevalent in monozygotic twins as in dizygotic twins.(9,10,11) IBS may be associated with selected gene polymorphisms, including those in IL-10, the G-protein GNb3, the alpha adrenoceptor, and the serotonin reuptake transporter (SERT).
Post-infectious IBS (PI-IBS):
Culture-positive gastroenteritis is a very strong risk factor for IBS. Different prospective studies show that IBS symptoms developed in 7% to 32% of patients after they recovered from bacterial gastroenteritis.(12,13,14) Specific risk factors for the development of PI-IBS have been identified, including younger age, female sex, severe infectious gastroenteritis of prolonged duration, use of antibiotics to treat the infection, and the presence of concomitant psychological disorders (e.g., anxiety).(12,13,15,16)
Small intestinal bacterial overgrowth:
Pimentel and colleagues(17,18) have shown that, when measured by the lactulose hydrogen breath test (LHBT), small intestinal bacterial overgrowth (SIBO) can be detected in 78% to 84% of patients with IBS. Hence, a higher than usual population of bacteria in the small intestine has been proposed as a potential aetiological factor in IBS. However, another study reviewing gastrointestinal-related symptoms (including IBS) found the sensitivity of the LHBT for SIBO to be as low as 16.7%, with a specificity of approximately 70%, so the test alone performed poorly for detecting small intestinal bacterial overgrowth. Combining it with scintigraphy yielded 100% specificity for assessing treatment response, because double peaks in serial breath hydrogen concentrations may occur as a result of lactulose fermentation by caecal bacteria.(19,20)
Food intolerance:
Approximately 60% of patients with IBS believe that allergy to certain foods can trigger their symptoms, and several studies support a role for food intolerance. In recent research, excluding foods to which patients had immunoglobulin (Ig) G antibodies (which are associated with a more delayed response after antigen exposure than IgE antibodies) resulted in significantly better symptom improvement than in the non-exclusion group.(21)
Serotonin signaling in Gastrointestinal (GI) tract:
Normal gut physiology depends on the interaction between the GI musculature, the autonomic nervous system (ANS) and the central nervous system (CNS), mediated by the neurotransmitter serotonin (5-hydroxytryptamine [5-HT]). Impairment of this interaction affects GI motility, secretion, and visceral sensitivity, leading to the symptoms associated with IBS.(22)
Preliminary steps toward making a positive diagnosis of IBS:
A careful history and physical examination are frequently helpful in establishing the diagnosis. A variety of criteria have been developed to identify a combination of symptoms to diagnose IBS. Different guidelines from different studies help in making a positive diagnosis of IBS based primarily on the pattern and nature of symptoms, without the need for excessive laboratory testing. In 1978, Manning and colleagues(23,24) proposed diagnostic criteria for IBS that were found to have a reasonable sensitivity of 78% and a specificity of 72%.(1) In 1984, Kruis and colleagues developed another set of diagnostic criteria, with a sensitivity of 77% and a specificity of 89%. Likewise, in 1990 the Rome I(25) criteria achieved a sensitivity of 71% and a specificity of 85%. Rome II (1999)(26) and Rome III (2006)(27) have not yet been evaluated. None of the symptom-based diagnostic criteria has been shown to have ideal reliability.(1)
Summary of diagnostic criteria used to define IBS:(1)
In 1978, Manning defined IBS as a collection of symptoms, given below, but did not describe their duration. The number of symptoms that need to be present to diagnose IBS was also not reported in the paper, but a threshold of three positive symptoms is the most commonly used:
a) Abdominal pain relieved by defecation
b) More frequent stools with onset of pain
c) Looser stools with onset of pain
d) Mucus per rectum
e) Feeling of incomplete emptying
f) Patient-reported visible abdominal distension
Kruis, in 1984, defined IBS by a logistic regression model that describes the probability of IBS (the general form of such a model is sketched after the lists below). Symptoms need to be present for more than two years. Symptoms are as follows:
a) Abdominal pain, flatulence, or bowel irregularity
b) Description of character and severity of abdominal pain
c) Alternating constipation and diarrhea
Signs that exclude IBS (each determined by the physician):
a) Abnormal physical findings and/or history pathognomonic for any diagnosis other than IBS
b) Erythrocyte sedimentation rate >20 mm/2 h
c) Leukocytosis >10,000/mm³
d) Anaemia (haemoglobin <12 g/dL for women or <14 g/dL for men)
e) Blood in the stool, either reported by the patient or seen by the physician on rectal (PR) examination
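For readers unfamiliar with the statistical form behind the Kruis score, the following is a minimal illustrative sketch of a generic logistic regression model; the variables x1…xn and weights β0…βn stand in for the coded symptoms, signs and fitted coefficients of the original 1984 model, which are not reproduced here.

```latex
% Generic logistic form of a symptom-score model (illustrative only).
% x_1 ... x_n: coded symptoms/signs; beta_0 ... beta_n: fitted weights.
% The coefficients published by Kruis and colleagues are not reproduced here.
\[
  P(\text{IBS} \mid x_1, \ldots, x_n)
    = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n)}}
\]
```

A patient's coded findings enter the weighted sum, and the resulting probability is compared with a threshold to classify the patient as likely or unlikely to have IBS.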
Again in 1990, Rome I defined IBS as abdominal pain or discomfort relieved with defecation, or associated with a change in stool frequency or consistency, PLUS two or more of the following symptoms on at least 25% of occasions or days for three months:
a) Altered stool frequency
b) Altered stool form
c) Altered stool passage
d) Passage of mucus
e) Bloating or distension
Rome II, in 1999, redefined the criteria as abdominal discomfort or pain that has two of the following three features for at least 12 weeks (which need not be consecutive) in the preceding 12 months:
a) Relieved with defecation
b) Onset associated with a change in frequency of stool
c) Onset associated with a change in form of stool
More recently, Rome III (2006) defined IBS as recurrent abdominal pain or discomfort at least three days per month in the last three months, associated with two or more of:
a) Improvement with defecation
b) Onset associated with a change in frequency of stool
c) Onset associated with a change in form of stool
The role of routine diagnostic investigation in patients with IBS:
Routine diagnostic investigation is based on the age of the patient, a family history of selected organic diseases including colorectal cancer, inflammatory bowel disease (IBD) and coeliac sprue, and the presence of ‘alarm’ features (Table 1), such as rectal bleeding, weight loss, iron deficiency anaemia and nocturnal symptoms.(1) In patients with typical IBS symptoms and no alarm features, routine diagnostic investigation (complete blood count, serum chemistry, thyroid function tests, stool for ova and parasites, and abdominal imaging) is not recommended(1) because of the low likelihood of uncovering organic disease.
Table 1: List of alarm features:
Rectal bleeding
Weight loss
Iron deficiency anaemia
Nocturnal symptoms: abdominal pain
Family history of selected organic diseases: colorectal cancer, inflammatory bowel disease (IBD), coeliac sprue
Summary of diagnostic investigations in patients with IBS:(1,2)
Diagnostic Investigations:
Routine serologic screening for coeliac sprue for patients with IBS-D and IBS-M.
Lactose breath test if lactose maldigestion is suspected despite dietary modification.
Colonoscopic imaging in IBS patients over 50 years of age or with alarm features, to rule out organic disease and to screen for colorectal cancer.
Colonoscopy with random biopsies in IBS-D to rule out microscopic colitis.
Management of IBS:
The goal of IBS management is to provide relief of symptoms and improve overall well-being.(28) Most studies use combination therapy, including patient education and psychological therapies, diet and fibre therapy, along with different types of newly emerging pharmacological therapies.
Patient education and psychological therapies:
The majority of patients with IBS have anxiety, depression and features of somatization. Psychological therapies, including cognitive behavioural therapy, dynamic psychotherapy and hypnotherapy,(1) shed new light on the management of patients with IBS. The outcome of psychological therapies is improved when they are delivered by a trained professional (physician, occupational therapist, nurse).(29) A study by Guthrie(30) showed that psychological therapy is feasible and effective in two-thirds of patients with IBS who do not respond to standard medical treatment.
Role of diet in IBS:
The concept of food intolerance and the consequent elimination of certain foods from the diet may benefit symptoms of IBS. However, there is insufficient evidence to support this.(1)
Therapeutic effect of dietary fibre, bulking agents and laxatives:
The quality of evidence supporting the recommended use of dietary fibre or bulking agents to regularize bowel function is poor.(31) Ispaghula husk (psyllium hydrophilic mucilloid) and calcium polycarbophil are moderately effective and can be given a conditional recommendation, although the supporting evidence is of the weakest type.(1) Polyethylene glycol (PEG) laxative has a role in improving stool frequency but no effect on abdominal pain. Different clinical studies and expert opinion suggest that increased fibre intake may cause bloating, abdominal distension and flatulence,(32) so gradual dose adjustment is advised when using these agents.
Therapeutic effect of antispasmodic agents including peppermint oil:
Certain antispasmodics (hyoscine, cimetropium, pinaverium and peppermint oil) may provide short-term relief of abdominal pain/discomfort in IBS.(33,34) Evidence for the safety and tolerability of these agents is very limited; the commonest adverse effects are dry mouth, dizziness and blurred vision.(34-36)
Table 2: Emerging agents in IBS therapy (Source: ACG Task Force on IBS, 2009)
Agent | Mechanism of action | Targeted disorder | Clinical status
Crofelemer | CFTR | IBS-D | Phase 2b complete
Linaclotide | Guanylate cyclase-C agonist | IBS-C | Phase 3
Arverapamil | Calcium channel blocker | IBS-D | Phase 3
Asimadoline | Kappa opioid agonist | IBS | Phase 2b complete
Mitemcinal | Motilin receptor agonist | IBS-C | Phase 2
Ramosetron | 5-HT3 antagonist | IBS-D | Phase 3
TD-5108 | 5-HT4 agonist | IBS-C | Phase 2
DDP-773 | 5-HT3 agonist | IBS-C | Phase 2
DDP-225 | 5-HT3 antagonist and NE reuptake inhibitor | IBS-D | Phase 2
BMS-562086 | Corticotropin-releasing hormone antagonist | IBS-D | Phase 2
GW876008 | Corticotropin-releasing hormone antagonist | IBS | Phase 2
GTP-010 | Glucagon-like peptide | IBS pain | Phase 2
AGN-203818 | Alpha receptor agonist | IBS pain | Phase 2
Solabegron | Beta-3 receptor agonist | IBS | Phase 2
Espindolol (AGI-011) | Beta receptor antagonist | IBS (all subtypes) | Phase 2
Dextofisopam | 2,3-benzodiazepine receptors | IBS-D and IBS-M | Phase 3
Therapeutic effect of anti-diarrhoeal medications:
The anti-diarrhoeal agent loperamide is effective at slowing colonic transit and improving stool consistency in the treatment of IBS-D, with no severe adverse effects.(37) However, safety and tolerability data are still lacking in many studies.
Therapeutic effect of antibiotics:
Many studies show that a short course of the non-absorbable antibiotic rifaximin is well tolerated and most effective for improving global symptoms in IBS-D and in IBS patients with the predominant symptom of bloating and other associated symptoms, such as diarrhoea and abdominal pain.(38-40) The United States Food and Drug Administration (FDA) has approved rifaximin for the treatment of traveller's diarrhoea. Other antibiotics, such as neomycin,(41) clarithromycin and metronidazole,(42) have also been evaluated for the management of IBS.
Therapeutic effect of Probiotics:
Probiotics have a number of properties that may benefit IBS. Bifidobacteria appear to be the active agents in probiotic combination therapy, whereas many studies show lactobacilli to have no impact on symptoms.(43) However, one Korean study concluded that composite probiotics containing Bifidobacterium bifidum BGN4, Lactobacillus acidophilus AD031, and other species are safe and effective, especially in patients who excrete normal or loose stools.(44) Recently, Moayyedi and colleagues, in their systematic review, concluded that probiotics appear to be efficacious in IBS, but the magnitude of benefit and the most effective species and strains are uncertain.(45)
Therapeutic effect of 5-HT3 receptor antagonists:
Alosetron, a 5-HT3 receptor antagonist given at 0.5 to 1 mg daily, is effective and is the most commonly used drug for the treatment of IBS-D, in spite of serious side effects including constipation and colonic ischaemia. The balance of benefits and harms for alosetron is most favourable in women who have not responded to conventional therapies.(46,47)
Therapeutic effect of 5-HT4 receptor agonists:
Tegaserod, a 5-HT4 receptor agonist, is effective for the treatment of IBS-C (mostly in women) and IBS-M. Side effects reported among patients receiving tegaserod include diarrhoea (the commonest) and cardiovascular events such as myocardial infarction, unstable angina, and stroke.(48,49) Currently, tegaserod is available from the FDA only through an emergency investigational new drug protocol. Other 5-HT4 agonists (cisapride, renzapride) have not demonstrated improvement compared with placebo.(50,51)
Therapeutic effect of the selective C-2 chloride channel activators:
Lubiprostone, a selective C-2 chloride channel activator, is effective in relieving symptoms of IBS-C, mostly in women, and has less frequent side effects, including nausea (8%), diarrhoea (6%) and abdominal pain (5%).(52)
Therapeutic effect of antidepressants:
Patients with prominent abdominal pain in IBS that fails to respond to peripherally acting agents are often considered for treatment with antidepressants (TCAs and SSRIs); however, data on the safety and tolerability of these agents are limited.(53) Antidepressants act through combined central and peripheral mechanisms in IBS.(54) SSRIs are better tolerated than TCAs and have a prokinetic effect, hence work better in IBS-C,(53,55) whereas TCAs are of greater benefit in IBS-D.
Therapeutic effect of herbal therapies and acupuncture:
Specific Chinese herbal mixtures show a benefit in IBS management.(56) Traditional Chinese herbal remedies are routinely used in China to treat the condition, but so far have not been generally accepted by conventional Western medicine.(56,57) Bensoussan and colleagues, in one randomized, double-blind, placebo-controlled trial, concluded that Chinese herbal formulations appear to offer improvement in symptoms for some patients with IBS.(57) A systematic review of different trials of acupuncture was inconclusive because of heterogeneous outcomes.(58,59) Hence further work is needed before any recommendation on acupuncture or herbal therapy can be made.
Emerging therapies:
The improved understanding of underlying mechanisms in IBS is beneficial for the development of new pharmacological treatment options.
A brief overview of emerging agents in IBS therapy is summarized in Table 2.(1)
Conclusion:
IBS is a true medical disorder that has a significant impact on sufferers with regard to symptom severity, disability, and impaired quality of life, exceeding that of most GI disorders. Advances in research over the past several decades have paved the way for a better understanding of the underlying pathophysiology and for standardized symptom-based approaches that can be used to make a positive diagnosis, as well as for the development of innovative treatment options for multiple IBS symptoms. Although many unanswered questions remain, the progress is promising and has better equipped physicians to diagnose IBS efficiently and to choose from a growing armamentarium of treatment options.
There has been a concerted attempt by government to engage doctors in management, and the importance of medical management in psychiatry has never been greater. This commenced with the Griffiths Report on management within the National Health Service1 (NHS) but received renewed emphasis 25 years later in Lord Darzi’s report.2 The NHS Next Stage Review final report, ‘High Quality Care for All’, sets out a vision for an NHS with quality at its heart. It places a new emphasis on enabling NHS staff to lead and manage the organisations in which they work. It pledges to incorporate leadership and management training into the postgraduate medical curriculum. The proposal that management training should be integral to the training of all doctors, including psychiatrists, is not new.3, 4
Although management as a component of training for doctors is generally accepted, new consultants are often poorly prepared to deal with the complex organisational issues involved in taking on managerial responsibility.5, 6 This is partly to do with prior training and partly because learning in this area needs to be based on experience. It is essential that they be adequately prepared to fulfil these responsibilities. Recent psychiatric literature has pointed to the need for psychiatrists to have the skills to develop their management and leadership roles and has called for more than ‘on the job training.’7
Management training for trainees – why?
It is important to recognise that all doctors will have some management responsibilities and it is a requirement of all doctors to fulfil these duties effectively as part of appraisal and revalidation. Medical training has traditionally focused on the clinical skills necessary to be a safe and competent clinician. It is increasingly important that doctors are not only competent clinicians but also have the skills to enable them to function efficiently and effectively within a complex healthcare system.
The aim for the doctor in training is to develop management skills in readiness to take on the responsibilities of a consultant. The management role of consultants is becoming more widely accepted and is continually increasing; for example, it may involve responsibility for teams, people, and the resources they use.8 Furthermore, the changing role of consultant psychiatrists calls for consultants to have the skills to fulfil management and leadership roles.9 However, while not always recognised, all doctors, including trainees, are required to perform some managerial functions from an early stage in their careers. Acquisition and application of leadership and management skills will enable them to contribute to the effective delivery of healthcare for patients.
The fast pace of change within healthcare provision means that it is important that current trainees have the appropriate skills for effective delivery of healthcare.10 It is clearly no longer acceptable that development of management and leadership competencies is left as optional.
What are the competencies that we need to acquire?
Leadership and management are a key part of a doctor’s professional work, and the development of appropriate competencies needs to be an integral part of a doctor’s training and development. The definition of the skills expected of all psychiatrists in training has relied on a number of documents, which include Good Medical Practice11 produced by the General Medical Council (GMC), Good Psychiatric Practice12 produced by the Royal College of Psychiatrists, and the Medical Leadership Competency Framework (MLCF).13 The Royal College of Psychiatrists recognises that psychiatrists will need to acquire a basic level of management skill, and this is reflected in the curriculum, which outlines the knowledge and experience to be gained during specialty training.
The intended learning outcomes for trainees are to demonstrate the ability to work effectively with colleagues including team-working, developing appropriate leadership skills, and demonstrating the knowledge, skills and behaviours to manage time and problems effectively.14 Furthermore the MLCF describes the leadership competencies that doctors need to acquire (Box 1). The MLCF was introduced in response to the recognised need to enhance medical engagement in leadership and was jointly developed by the Academy of Medical Royal Colleges, GMC and the NHS Institute for Innovation and Improvement.15
Box 1: Leadership competencies to be gained during specialty training
1. Demonstrating personal qualities
Developing self awareness
Managing yourself
Continuing personal development
Acting with integrity
2. Working with others
Developing networks
Building and maintaining relationships
Encouraging contribution
Working within teams
3. Managing services
Planning
Managing resources
Managing people
Managing performance
4. Improving services
Ensuring patient safety
Critically evaluating
Encouraging improvement and innovation
Facilitating transformation
5. Setting direction
Identifying the contexts for change
Applying knowledge and evidence
Making decisions
Evaluating impact
How to attain competencies in management and leadership - formal qualifications versus ‘on the job training’
It is important to realise that the acquisition of management competencies is an ongoing experience which starts early in one's career. Any trainee embarking on management training should consider very carefully the alternatives, assess their needs, and determine their own aims and objectives. It is often necessary to choose and tailor an individual training package. We share our experiences of two routes that can lead the trainee to acquire the relevant skills. For the convenience of the reader we will discuss these under the headings of ‘formal qualifications’ and ‘on the job training.’
Formal qualifications (MSc in Health and Social Care Management)
There are many advanced courses on offer, leading to a management qualification, usually lasting several years. Some of these courses are MBA (Health Executive), MSc in Health and Social Care Management, MSc in Health and Public Leadership, Masters degree in Medical Leadership, and Masters in Medical Management.
We (OW and AS) are pursuing an MSc in Health and Social Care Management, through the Faculty of Health and Applied Social Sciences at Liverpool John Moores University, on a part-time basis using our dedicated special interest time (six sessions per month). This degree has been specifically designed to provide all health and social care professionals with the opportunity to develop their knowledge and skills to facilitate their role as managers. The programme is structured in such a way as to facilitate the part-time student and enhance their learning experience.
The MSc is modular in structure. In the first year the student will undertake three core management modules. In the second year the student will undertake a research methods module, management module and an individual work-based project. The final year culminates in a dissertation involving a significant piece of research. The student can choose to register for CPDs and there is an option to exit after one year (60 credits) with a Postgraduate Certificate or after two years (120 credits) with a Diploma. University regulations allow students to gain credit for demonstration of relevant prior learning, whether certificated or not. The course format is shown in Box 2.
The ratio of coursework, in-house teaching and self-directed learning varies between modules. Each module usually requires attendance at half a day to one day of in-house teaching per week. The programme uses a variety of assessment procedures, including written assignments of 2000–5000 words, video role-play, seminar presentations and work-based projects. Completing the assignments represents the greatest challenge to time and requires commitment and motivation.
Box 2: Format of the MSc in Healthcare Management at Liverpool John Moores University
· Improving service delivery through human resource management (20 credits)
· The economics of World Class commissioning (20 credits)
· Advancing leadership for quality (20 credits)
· Research methods and data analysis (30 credits)
· Strategic management and entrepreneurship (20 credits)
· Individual study or work based learning (10 credits)
· Dissertation (60 credits)
Strengths and weaknesses of an MSc in Health and Social Care Management
Whilst on the course we were able to learn a variety of concepts that were completely new to us; the main challenge was to put them into practice. As part of the course we had to work on management-related projects in our workplaces, so that we could apply the concepts we had learnt in real time.
We believe that the MSc course has undoubtedly improved our understanding of team working and leadership whilst working on a work-based project. The projects were specific supervised experiences linked to key developmental objectives and enhanced our problem-solving and decision-making, the ability to analyse and reflect on situations, as well as the expected understanding of resource management and change management.
We have been able to analyse personal development needs to enhance personal effectiveness and leadership skills. It helped us to critically evaluate the impact of action learning for organisational development. We have gained an insight into the concepts of commissioning and the role of economic evaluation. We were able to critically appraise the impact of government policies on the commissioning process. Our skills and knowledge of human resource management within the contemporary policy context have increased. We really do feel that the course has improved our insight into change management.
We hope that completing a significant research project within an academic setting will further develop our research skills. So far it has been a valuable and stimulating experience that has provided us with both skills and knowledge in management. The teaching and learning approaches for all modules draw on the experiences of the workplace. All core module assessment tasks are linked to the workplace, which is particularly useful.
However, the process of developing a dissertation proposal, finding a supervisor, gaining ethical approval and proceeding with the research is time consuming and at times frustrating. The financial cost is a significant consideration but can be partially funded through the study leave budget. Furthermore, there is funding available for some modules through the Strategic Health Authority. As we were using most of our special interest sessions to pursue the degree, we had to put in extra effort to develop additional clinical interests.
‘On the job training’- what does that mean?
On the job management training may entail clinical managerial experience (e.g. organising outpatient clinics, developing systems for prioritising clinical work, managing teams, and drawing up on-call rotas), specific skills (e.g. chairing meetings, organising training days, and representation on committees), specific management experience (e.g. participation in service development) and resource management (non-clinical aspects of management such as human resources and finance).
The clinical setting provides many opportunities to gain the knowledge, skills, attitudes and behaviours identified in the management and leadership curriculum. The diversity of daily clinical practice will enable the acquisition of appropriate skills, and trainees need to take advantage of all the formal and informal learning opportunities. These range from workplace-based ‘learning sets’16 to project-based learning. It is the responsibility of the trainers to ensure adequate and appropriate educational opportunities are made available to the trainee. In turn, the trainee should be enthusiastic and proactive in identifying their own gaps in knowledge, skills, attitudes and behaviour.
It is important to bear in mind that such training should be supplemented by selected formal courses. Some training schemes offer no organised management training, whilst some provide training as a short and often intense course.17 A variety of courses have been developed for trainees, both at regional and national level. Trusts, Deaneries, independent organisations, universities and the Royal Colleges run such courses. These courses are normally short, lasting a week or less. The components of ‘on the job training’ in Merseycare NHS Trust and generic management courses offered by Mersey Deanery are listed in Boxes 3 and 4 respectively.
Box 3: Components of ‘on the job training’ in Merseycare NHS Trust
Appropriate involvement of trainees in clinical teams
Appropriate involvement of trainees in service development
Shadowing arrangements in placements
Undertake a management project
Senior managers in the trust as mentors to trainees
Action learning sets for trainees
Trainees developing teaching and supervisory skills with junior colleagues
Management seminars
Representation on committees (e.g. school board, local negotiating committee, local education board etc)
Two-day and three-day residential management training for higher trainees
Generic management courses run by the deanery
Personal development and management courses hosted by the College
Box 4: Generic management courses offered by Mersey Deanery
Management and leadership
Mentoring, appraisal, interview skills
Effective team-working
Managing change
Time management
Preventing and managing stress
Negotiating skills
Managing meetings
Strengths and weaknesses of ‘on the job training’
‘On the job training’ may vary from one placement to another depending on the availability of resources and mentors. Achieving ‘on the job’ management experience depends on the enthusiasm of the senior trainee. It is more personalised and individually driven. Higher training posts do provide exposure to management issues, but do not necessarily provide in-depth management experience.
It is easier to gain experience in clinical management skills but it can be difficult to achieve specific management experience including resource management. Trainers with formal management roles do not routinely engage trainees in this aspect of their work, and similar experiences have been expressed in other training schemes.18 Even if there are opportunities available to get involved in service development and other operational issues, one may struggle to commit any time.
Furthermore the loss of protected training (reduction of special interest to only two sessions for specialist trainees) to service provision has impacted on training.19 The formal courses are confined to development of skills such as leadership, teamwork and management of conflict. Residential management courses are available, providing one week or less of intensive training. The amount of management theory and techniques that can be learned on such courses is limited. The limited theoretical training in management means that trainees are unlikely to be adequately prepared for the extensive management role.
Which one is for you?
Managing services and leading organisations is not for everyone. Nevertheless, the medical role has inherent elements of leading and managing patient care, and therefore doctors are often involved in service improvement and development. Perhaps the key issue is whether qualifications alone are sufficient to equip a doctor to be an effective manager, or whether experience alone is enough. It is important to remember that management qualifications tend to involve real-time application of concepts (much as in on the job training) but at the same time give a solid knowledge base. Furthermore, limited experience (involvement in local management) is unlikely to be sufficient, and therefore experience should ideally be supplemented by selected formal courses.
However, even with the most impressive portfolio of formal training, trainees will nevertheless have to demonstrate competence in leadership and management in their work. All trainees are adult learners who ought to take responsibility for their own education. Which route a trainee takes depends not only on what the trainee intends to do in their future role but also on where they train and what resources are available. Training needs will differ depending on past experience, competence, and capabilities. It is important for trainees to recognise that their training needs will differ depending on their interests and the type of consultant post to which they aspire.
Formal qualifications would suit those with a well-developed interest in management and a desire to make this a significant part of their ongoing career. If the trainee intends to take a lead management role it may be necessary and useful to complete a Master’s degree. It will provide the trainee with both skills and knowledge in management and a well-recognised and formal degree in management. Having established that, it is worthwhile appraising the variety of courses available, as they vary significantly. It is helpful to determine the course’s content, assess its relevance, and establish how much in-house teaching and self-directed learning is expected. For those who want to acquire management skills for better day-to-day functioning in their job, it is useful to analyse their personal development needs and complete relevant modules according to these needs. This could be attained through ‘on the job training’ if resources can be identified and secured. A final point to bear in mind is the Royal Colleges’ direct contribution to developing management and leadership in trainees. For example the Royal College of Psychiatrists promotes engagement of doctors in management and has a dedicated Special Interest Group for management.
Syncope is a common condition encountered in acute medical practice; many patients with syncope are initially labelled as having “collapse query cause”. Syncope is defined as transient loss of consciousness (T-LOC) due to transient global cerebral hypoperfusion, characterized by rapid onset, short duration, and spontaneous complete recovery1. The incidence of syncope is difficult to determine accurately as many cases remain unreported. Some studies quote an overall incidence rate of a first report of syncope of 6.2 per 1000 person-years. Clearly this is age related, and the incidence increases dramatically in patients over the age of 70 years2. Syncope accounts for 1-6% of hospital admissions and 1% of emergency department (ED) visits per year3-5. Hospital episode statistics from NHS hospitals in England reported a total of 119,781 episodes of collapse/syncope for the financial year 2008-09, about twice the number of episodes reported in the year 1999-2000. About 80% of these patients were admitted, with an average length of stay of 3 days, accounting for over 269,245 bed days during that financial year6.
Syncope is also associated with significant mortality and morbidity if left untreated. The literature reports a 6-month mortality of 10%, which can rise to 30% if cardiac syncope is untreated7. Non-cardiac syncope is associated with a survival rate comparable to that of people without syncope2. Syncope is also a risk factor for fractures related to falls, especially in the elderly, and can cause significant morbidity in this group8. In addition, there are significant health care costs associated with the management of syncope. Cost per diagnosis can vary from over £611 in the UK to €1700 in Italy, and hospitalisation alone accounted for 75% of the cost in some studies9,10. Diagnosis of this condition can be difficult, especially if there is a lack of a structured approach. Over the last few years this topic has attracted enormous interest and several studies have been published aiming at improving the approach to this condition. Standardised syncope pathways improve diagnostic yield and reduce hospital admissions, resource consumption and overall costs10. Recently the Task Force for the Diagnosis and Management of Syncope of the European Society of Cardiology published guidelines for the diagnosis and management of syncope1. However, in spite of the available evidence, very few hospitals have standardised syncope pathways for the management of this complex condition; only 18% of EDs have specific guidelines and access to a specialist syncope clinic11. This article focuses on evidence-based structured evaluation of syncope.
Current practice in the management of syncope
Due to the difficulty in diagnosis and the mortality associated with this condition, a cautious approach may be taken by physicians, resulting in hospitalisation of the majority of patients presenting with syncope. We recently audited the management of syncope in our hospital, a tertiary centre in the north of Scotland. Fifty-eight patients admitted with this condition over a period of a month were included in the audit, which showed an average length of stay (LOS) of 4.76 days. Due to the lack of a methodical approach and a standardised pathway for the management of this condition, many patients were subjected to several inappropriate inpatient investigations, significantly prolonging the LOS and increasing the cost. Only 7 (12%) cardiac events were observed in this group, and in retrospect a good methodical approach would have predicted these events. It should be noted that even in the geriatric population, reflex syncope, which carries a benign prognosis, is more common than cardiac syncope2.
A systematic approach to the management of syncope (Figures 1 and 2)
The causes of syncope can be broadly divided into cardiac and non-cardiac causes (Table 1). Initial evaluation leads to a diagnosis in less than 50% of patients in most instances4,12-14. If there is uncertainty about the diagnosis, the patient is risk stratified. High-risk patients are hospitalised, evaluated and treated, whereas early discharge can be considered in low-risk patients.
Table 1: Aetiology of syncope41
Neurally-mediated (reflex) syncope: vasovagal syncope; carotid sinus syncope; situational syncope (e.g. micturition, post-prandial, defecation, cough)
Cerebrovascular
History
Many patients with syncope are initially labelled as having “collapse query cause”; loss of postural tone is termed “collapse”. Indeed, the term “collapse query cause” does not give any useful information about the underlying condition. A clear history from the patient and from any bystander or witness is the key to the diagnosis. Firstly, determine whether the collapse was associated with loss of consciousness (LOC). LOC can be transient (T-LOC) or prolonged. Categorising “collapse” is important at this stage, as the aetiology of, and approach to, each category is different (Figure 1). Secondly, establish whether the collapse was syncopal. The LOC should be transient (e.g. did the patient regain consciousness in the ambulance, or before or on arrival at hospital?), of rapid onset, and associated with spontaneous complete recovery; the mechanism should be transient global cerebral hypoperfusion. T-LOC secondary to other mechanisms, such as trauma and brief seizures, should be excluded. On occasion syncope may be associated with brief jerking movements mimicking seizures15. Note also that a transient ischaemic attack (TIA), commonly listed by physicians as a differential diagnosis of syncope, is not a cause of syncope, as it is not associated with global cerebral hypoperfusion. The absence of a coherent history, because the patient has no recollection of events and no witness account is available, can make this distinction difficult; it is particularly difficult in the elderly with cognitive impairment. Other useful information includes whether the syncope was associated with postural change, as orthostatic hypotension occurs after standing. If present, it is useful to check the drug history (new vasodepressive drugs). Features suggestive of Parkinson’s disease or amyloidosis may raise the possibility of autonomic neuropathy, and a strong family history of sudden cardiac death may be of relevance. Table 3 summarises the features of neurally mediated and cardiac syncope.
Table 3: Features suggesting neurally mediated and cardiac syncope42
Neurally mediated:
Preceded by prodrome
Related to particular activity, e.g. micturition, postprandial, prolonged standing, unpleasant situations
Associated with nausea and vomiting
Occurring after exertion
Cardiac:
Absence of prodrome, no warning
Associated with chest pain, breathlessness, palpitation
Occurring during exertion or while supine
History of cardiac disease
Family history of sudden cardiac death
Physical examination
The next step is a thorough physical examination. This should include an ABC approach if the patient is very ill, and particular attention should be given to excluding immediately life-threatening conditions such as pulmonary embolism, acute myocardial infarction, life-threatening arrhythmias, acute aortic dissection and seizures. Recording the vital signs is important as it may give a clue to the diagnosis (e.g. acute hypoxia may indicate massive pulmonary embolism). Recording postural blood pressure when lying and during active standing for 3 minutes is useful to exclude orthostatic hypotension1. A difference in blood pressure between the two arms may be a useful clinical finding, especially if acute aortic dissection is suspected. Thorough cardiorespiratory examination may reveal an obvious condition such as cardiac failure or aortic stenosis. Patients should also be examined for potential injuries sustained as a result of syncope.
Standard ECG
A 12-lead ECG should be performed in all patients admitted with syncope. The abnormalities in Table 4 would suggest a cardiac aetiology. The QT interval should always be measured, as it is a commonly overlooked abnormality.
Blood tests
Blood tests are usually unhelpful in establishing a diagnosis but can detect metabolic abnormalities such as hypoglycaemia, electrolyte abnormalities and other causes of LOC, especially when a witness account is not available. An acute drop in haemoglobin suggests blood loss. One recent study suggests that brain natriuretic peptide (BNP) is useful for predicting adverse outcomes in syncope, but this has not been externally validated and it is too early to recommend it for routine clinical practice16.
Pacemaker check
It is not uncommon to see a patient with an implanted pacemaker admitted to hospital with syncope. In these circumstances it is essential to rule out device malfunction, although this is not a common cause of syncope. A preliminary and easy test is to interrogate the pacemaker if facilities are available; this should pick up any problems with the pacemaker in most instances.
With the above information, establishing a diagnosis will be possible in a significant proportion of patients. Further investigations and management should be guided by the underlying diagnosis1. However, in over half of patients the diagnosis may still be uncertain12,13,17. The following section explains the management of unexplained syncope.
Risk stratification in patients with unexplained syncope (Tables 4 and 5)
Table 4: ECG changes in ‘high-risk’ syncope41
ECG changes favouring bradyarrhythmias
High degree AV blocks – Mobitz type 2 second degree AV block, complete heart block, trifascicular block (first degree heart block with left bundle branch block (LBBB) or right bundle branch block (RBBB) with axis deviation)
Bifascicular block (defined as either LBBB or RBBB combined with left anterior fascicular block or left posterior fascicular block) especially if new
Other intraventricular conduction abnormalities (QRS duration >0.12 s)
Asymptomatic sinus bradycardia (<50 bpm), sinoatrial block or sinus pause >3 s in the absence of negatively chronotropic medications
ECG changes favouring tachyarrhythmias
Pre-excited QRS complexes (e.g. WPW syndrome)
Prolonged QT interval
Right bundle branch block pattern with ST-elevation in leads V1–V3 (Brugada syndrome)
Negative T waves in right precordial leads, epsilon waves and ventricular late potentials suggestive of arrhythmogenic right ventricular dysplasia (ARVD)
Q waves suggesting myocardial infarction
Non-sustained ventricular tachycardia
Table 5: Clinical features of high-risk syncope1,18-23
History of severe structural heart disease or heart failure, presence of ventricular arrhythmia
Syncope during exertion or supine
Absence of prodrome or predisposing or precipitating factors
Preceded by palpitation or accompanied by chest pain or shortness of breath
Family history of sudden cardiac death
Examination suggestive of obstructive valvular heart disease
Syncope associated with trauma
Systolic blood pressure less than 90mm Hg
Hematocrit less than 30% (acute drop in hemoglobin)
When the cause of syncope is uncertain it is essential to risk stratify patients to enable appropriate treatment and further investigation.
Risk stratification tools
There are several scoring systems for risk stratification of syncope. The Syncope Evaluation in the Emergency Department Study (SEEDS), Osservatorio Epidemiologico sulla Sincope nel Lazio (OESIL) score, Evaluation of Guidelines in SYncope Study (EGSYS) score, San Francisco Syncope Rule (SFSR), Risk stratification Of Syncope in the Emergency department (ROSE) and the American College of Emergency Physicians clinical policy are the popular ones, and each has its own advantages and disadvantages1,16,18-23. Discussing each scoring system is beyond the scope of this article, so we restrict the discussion to a summary of these risk stratification tools (Table 5). It would be premature to include all the factors mentioned in the ROSE study, as it has not yet been externally validated. It could be argued that applying all the risk factors described may increase admission rates, but this approach should at least not miss the high-risk patient. This is a developing field and more evidence is likely to be published soon.
High-risk vs. low-risk syncope
A high-risk syncope patient is one in whom a cardiac cause is likely and in whom short-term mortality is high due to major cardiovascular events and sudden cardiac death. High-risk syncope is said to be present if any of the features in Table 4 or Table 5 are present.
Management of low-risk syncope
Patients with a single or very infrequent syncope are usually reassured and discharged, as the short-term mortality is low1,2. A tilt table test is not usually required where a single or rare episode of neurally mediated syncope is diagnosed clinically. One exception, where single rare episodes are investigated further with a tilt table test, is when there could be an occupational implication (e.g. an aircraft pilot) or a potential risk of physical injury. Patients with recurrent unexplained syncope need to be investigated further (see below).
Management of high-risk syncope / suspected cardiac syncope
High-risk patients usually require hospitalisation and inpatient evaluation. Other patients who may be considered for admission are vulnerable patients susceptible to serious injuries, for example elderly patients or those with multiple co-morbidities.
Further investigations (Table 6)
Further investigations (Table 6)

Non-invasive: echocardiography; ECG monitoring (telemetry, Holter monitoring, external loop recorder*); carotid sinus massage; cognitive testing (in the elderly); ambulatory blood pressure monitoring; tilt table test*; exercise stress test
Invasive: implantable loop recorder*; coronary angiography*; electrophysiology*
* Specialist investigation

Echocardiography
Echocardiography is a relatively inexpensive and non-invasive investigation. It should be performed if there is clinical suspicion of a significant structural abnormality of the heart, such as ventricular dysfunction, outflow tract obstruction, obstructive cardiac tumours or thrombus, or pericardial effusion. The yield of this test is low in the absence of clinical suspicion of structural heart disease. However, in the presence of a positive cardiac history or an abnormal ECG, one study detected left ventricular dysfunction in 27% of patients, and half of these patients had syncope secondary to an arrhythmia; among patients with suspected obstructive valvular disease, 40% had significant aortic stenosis as the cause of syncope24.

ECG monitoring
These tests have utility in identifying arrhythmogenic syncope. If a patient has syncope that correlates with a significant rhythm abnormality during the monitoring period, then the rhythm abnormality is the cause of the syncope. Conversely, if no rhythm abnormality is recorded during a syncopal attack, an underlying rhythm problem is excluded as the cause. These tests are therefore meaningful only if there is a symptom-rhythm correlation, which is the working principle of these devices. Even in the absence of syncope during the monitoring period, these tests may pick up other relevant abnormalities, for example rapid prolonged supraventricular tachycardias, ventricular tachycardias, periods of high-degree AV block (Mobitz type II or complete heart block) or significant sinus pauses >3 seconds (except during sleep, negatively chronotropic therapy or in trained athletes), which will require further investigation or treatment.

Telemetry
Telemetry can be used in inpatients. Although the diagnostic yield of this investigation is only 16%, given the high short-term mortality this test is indicated in the high-risk group1. Patients are usually monitored for 24 to 48 hours, although there is no agreed standard monitoring period25.

Holter monitoring
This involves connecting the patient to the recorder through cutaneous patch electrodes. It conventionally records ECG activity over 24-48 hours, or at times up to 7 days. It is particularly useful in patients who have frequent, regular symptoms (≥1 per week); for this reason, the yield of this test can be as low as 1-2% in an unselected population1. Long inpatient waiting lists in some hospitals can significantly prolong the length of stay and cost. Selecting patients carefully for this test based on risk stratification will reduce costs and waiting lists.

Carotid sinus massage
This simple bedside test is indicated in patients over the age of 40 years with syncope of unexplained origin after initial evaluation. A ventricular pause lasting >3 s and/or a fall in systolic BP of >50 mmHg defines carotid sinus hypersensitivity (CSH) syndrome (a minimal sketch of these thresholds, in code, follows this section). It is contraindicated in patients with a recent cerebrovascular accident (within the past 3 months) or with a carotid bruit, except when a Doppler study has excluded significant stenosis1.

Cognition test
If an elderly patient has forgotten the events, in the absence of an obvious cause it may be useful to test cognition. If cognitive impairment is present, common problems associated with cognitive dysfunction should be considered, e.g. falls and orthostatic hypotension.
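The CSH thresholds above lend themselves to a simple check. The sketch below is an illustrative encoding of those two criteria only, assuming the ventricular pause and systolic blood pressure fall have already been measured; it is not a substitute for the test protocol or its contraindications.

```python
# Illustrative only: encodes the CSH thresholds quoted above
# (ventricular pause >3 s and/or systolic BP fall >50 mmHg).
def csh_response(pause_s, sbp_fall_mmhg):
    """Classify a carotid sinus massage response against the CSH thresholds."""
    cardioinhibitory = pause_s > 3.0        # ventricular pause >3 s
    vasodepressor = sbp_fall_mmhg > 50.0    # systolic BP fall >50 mmHg
    if cardioinhibitory and vasodepressor:
        return "mixed CSH"
    if cardioinhibitory:
        return "cardioinhibitory CSH"
    if vasodepressor:
        return "vasodepressor CSH"
    return "no CSH"

print(csh_response(pause_s=4.2, sbp_fall_mmhg=20))  # cardioinhibitory CSH
```

The cardioinhibitory/vasodepressor distinction matters later in this article: cardiac pacing is only considered for a predominant cardioinhibitory response, as pacing has no effect on the vasodepressor component.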
Other investigations
If a cause is not determined in spite of the above tests, early specialist input is recommended for further investigation and treatment. The following non-invasive and invasive investigations may be appropriate in these circumstances.

External loop recorder
This is a non-invasive form of electrocardiographic monitoring whose principle is the same as that of Holter monitoring. External loop recorders have a loop memory that continuously records and overwrites the ECG. When activated by the patient, typically after a symptom has occurred, 5-15 min of pre-activation ECG is stored and can be retrieved for analysis. Studies have shown an increased diagnostic yield compared with Holter monitoring1. They should be considered in patients who have symptoms on a monthly basis.

Tilt table test
This is indicated in recurrent unexplained syncope after relevant cardiac causes have been excluded and, in the absence of contraindications, a negative carotid sinus massage has been performed. It is also indicated when it is of clinical value to demonstrate a patient's susceptibility to reflex syncope and thereby initiate treatment. Other, less common indications are recurrent unexplained falls, differentiating the jerking movements of syncope from epilepsy, diagnosing psychogenic pseudosyncope, and differentiating orthostatic from reflex syncope. The indication for this test in the context of a single unexplained syncope is discussed above.

Ambulatory blood pressure monitoring
This may be useful in patients with unexplained syncope, particularly in old age, to check for an element of autonomic failure when a single set of orthostatic blood pressure recordings is unhelpful. One study showed that 25% of elderly patients admitted with falls or syncope had postprandial hypotension, especially after breakfast26. It may be more readily available than a tilt table test in some centres.

Exercise stress test
This may be useful in the rare entity of exercise-induced syncope. Outflow tract obstruction should be excluded by echocardiography before subjecting a patient to this test, especially in the presence of relevant signs. However, there is no evidence to support this test in investigating syncope in the general population.

Implantable loop recorders
These are implanted subcutaneously and need to be activated either by the patient or by a bystander after a syncopal attack. They are indicated in high-risk patients in whom a comprehensive evaluation did not establish an underlying diagnosis and, in the absence of high-risk features, in patients with recurrent unexplained syncope, especially if infrequent. Conventionally the device is used as a last resort in recurrent unexplained syncope, as the initial costs are high. One study showed it to be more cost-effective than the conventional strategy and more likely to provide a diagnosis in patients with recurrent unexplained syncope27; however, patients with poor LV function and those at high risk of life-threatening arrhythmias were excluded from that study.

Coronary angiography or CT coronary angiography
This may be helpful in suspected myocardial ischaemia or ischaemia-related arrhythmias. An electrophysiological study may be considered in certain circumstances by cardiologists. When a standardised pathway is used, the diagnosis is ascertained in 21% of patients on initial evaluation and in a further 61% with early investigations; in only 18% of patients does the diagnosis remain uncertain12.
Other studies have shown similar results28. Although these results are from a dedicated syncope unit following a standardised pathway, they could be extrapolated to any unit following such pathways. Further management is dictated by the underlying diagnosis, with early specialist input for appropriate treatment.

Treatments
Single or rare episodes of reflex syncope do not require treatment, but recurrent, troublesome reflex syncope may warrant it. Treatment modalities are primarily non-pharmacological, such as tilt training, physical counter-pressure manoeuvres (leg crossing, hand gripping) and ensuring adequate hydration29. If symptoms are refractory to non-pharmacological measures, midodrine (an alpha agonist) may be considered in patients with frequent hypotensive symptoms30,31. Fludrocortisone may be used in the elderly, although there is no trial evidence to support this. Beta-blockers have been presumed to lessen symptoms but have been shown to be ineffective in several studies32; they may potentially exacerbate bradycardia in carotid sinus syncope and are not recommended in the treatment of reflex syncope. Cardiac pacing in reflex syncope is controversial and may be considered in patients with a predominant cardioinhibitory response on carotid sinus massage (in CSH syndrome) or on tilt testing (in reflex syncope). It should be noted that cardiac pacing has no effect on the often-dominant vasodepressor component of reflex syncope.

In patients with orthostatic hypotension, non-pharmacological measures such as increased salt and water intake, head-up tilt sleeping, physical counter-pressure manoeuvres, abdominal binders and compression stockings may help reduce symptoms. Midodrine is an effective alternative in these circumstances, and fludrocortisone can also be used33,34.

Syncope secondary to cardiac arrhythmias needs treatment if a causal relationship is established. Potentially reversible causes, such as electrolyte abnormalities and drug-induced causes, should be excluded. Cardiac pacing is the treatment for significant bradyarrhythmias secondary to sinus node disease or advanced AV nodal disease such as Mobitz type II block, complete heart block or trifascicular block. Catheter ablation and anti-arrhythmic drug therapy are the main treatments for tachyarrhythmias. An implantable cardioverter defibrillator may be indicated in patients susceptible to malignant ventricular tachyarrhythmias. Syncope secondary to a structural cardiopulmonary abnormality will need surgical intervention where possible.

Driving and Syncope
Doctors are poor at addressing and documenting this issue35. Table 7 gives some useful information from the DVLA website (http://www.dft.gov.uk/dvla/medical/ataglance)36. This information is country-specific and subject to change.

Table 7 – Driving and Syncope in the UK36
Type of syncope | Group 1 entitlement (car, motorcycle, etc.) | Group 2 entitlement (large goods vehicle, passenger-carrying vehicle)
Simple faint | No restrictions | No restrictions
Unexplained syncope with low risk of recurrence* | Allowed to drive 1 month after the event | Allowed to drive 3 months after the event
Unexplained syncope with high risk of recurrence**, cause identified and treated | Allowed to drive 1 month after the event | Allowed to drive 3 months after the event
Unexplained syncope with high risk of recurrence**, cause not identified | Licence refused or revoked for 6 months | Licence refused or revoked for 12 months
* Absence of clinical evidence of structural heart disease and a normal ECG
** Abnormal ECG, clinical evidence of structural heart disease, syncope causing injury, or recurrent syncope

Syncope units
Syncope units aim to evaluate syncope (and related conditions) in dedicated units staffed by generalists and specialists with an interest in syncope; a sufficient number of patients is required to justify such a unit. They are well equipped, with facilities for recording ECG and blood pressure, tilt table testing, autonomic function testing, ambulatory blood pressure monitoring, and invasive and non-invasive electrocardiographic monitoring. Syncope units have been shown to be cost-effective, reducing health care delivery costs by lowering admission, readmission and event rates. Examples include the Newcastle, Manchester and Italian models12,18,37,38.

Conclusions
The incidence of syncope is increasing in the UK with an ageing population, and significant cost is incurred in delivering health care for this condition. The approach to syncope varies widely amongst practising physicians owing to the lack of a methodical approach. A thorough initial evaluation yields a diagnosis in fewer than half of patients; when the cause of syncope remains unexplained after initial evaluation, the patient should be risk stratified. While a patient with a single episode of low-risk syncope can be reassured and discharged, those with high-risk features should be hospitalised for further management. Outpatient evaluation can be offered to low-risk patients with recurrent syncope. Early specialist input should be sought in high-risk syncope and in recurrent unexplained syncope. This standardised approach, or pathway, will reduce cost by reducing hospitalisation, inappropriate investigations and length of stay.
Key Facts
Collapse associated with transient loss of consciousness is called syncope if it is due to transient global cerebral hypoperfusion and characterized by rapid onset, short duration, and spontaneous complete recovery
Standardised syncope pathways improve diagnostic yield and reduce hospital admissions, resource consumption and overall costs
A thorough initial evaluation yields a diagnosis in less than half of patients. If the cause of syncope is undetermined after initial evaluation, patients should be risk stratified
Early discharge should be considered in low risk patients while high-risk patients need urgent evaluation.
Early specialist referral is recommended in patients with high risk syncope and recurrent unexplained syncope
Future Interests
Syncope has been recognised for several decades yet remains a complex condition, as the exact mechanisms are poorly understood, especially in non-cardiac syncope. The mechanism of syncope in elderly patients may differ from that in young patients, and studies should focus on understanding these mechanisms. Further research is needed in risk stratifying syncope; it may enable us to develop more robust care pathways for the management of syncope. The role of BNP in investigating and risk stratifying syncope needs further clarification. In spite of sophisticated tests, the cause of syncope remains uncertain in a proportion of patients, and studies should focus on the long-term outcome and management of syncope in this group. The role of the implantable loop recorder in the investigation of syncope should be better defined, and more studies should address when it should be offered in the management pathway. Studies are also required to develop effective pharmacotherapies for this condition.
Patients, in both the pre- and post-operative periods, seek and receive advice from a number of health professionals. The advent and subsequent increasing use of day case surgery has also reduced patients' exposure to surgical staff, so that patients increasingly seek post-operative advice from their general practitioner and allied health care professionals. The development of innovative surgical techniques means that traditional teaching on the time needed for convalescence following surgery is somewhat outdated. The aim of this study was, first, to determine the actual time taken for patients to return to work, driving and their daily routine following a number of routine general surgical procedures, and secondly to determine the advice that GPs and surgeons would give to patients following routine surgery.
Patients and Methods
Patients aged 65 years or less who had undergone routine surgical procedures (open unilateral inguinal hernia repair, laparoscopic cholecystectomy, laparoscopic hernia repair or unilateral varicose vein surgery) over a six-month period (January – June 2004) were identified from the theatre database. A single-page questionnaire was sent to each patient (Appendix 1), asking about the following:
Occupation
Time taken to return to normal activities following surgery
Time taken to return to driving following surgery and any advice given
Expected and actual time off from work following surgery
Distribution and length of a sick note
Expectations following surgery
Experience of day case surgery
Questionnaires were returned and data collected on a specially constructed database. Concurrent to this a further questionnaire (Appendix 2) was distributed to a number of differing groups of health professionals. These were namely:
GPs – this included the GPs of all patients who had been identified as having undergone surgery in the specified six month period as well as all doctors on the vocational training scheme.
Surgeons – this included all senior house officers on the Yorkshire School of Surgery Basic Surgical Training Scheme and all Higher Surgical Trainees (General Surgery) within the Yorkshire Deanery, including non-career grade doctors.
Replies were anonymous, and each health care professional was asked what advice they would give an “average” patient undergoing each of the four procedures with regard to the time taken to return to work (office or heavy), driving and normal activities. They were also asked whether they felt the procedure was suitable for day case surgery.

Statistical Analysis
Statistical analysis was undertaken using the Analyse-it statistical package (Leeds, UK). Non-parametric analysis using either the Kruskal-Wallis one-way ANOVA or the Mann-Whitney U test was used to test for a difference between the medians of independent samples; the Wilcoxon signed-ranks test was used to test for a difference between the medians of two related samples. Significance was determined as a p-value < 0.05 (an illustrative code sketch of these tests appears at the end of this section).

Results
Nineteen of 48 patients who underwent varicose vein surgery (39%), 44 of 72 who underwent a laparoscopic cholecystectomy (61%), 23 of 35 who underwent a laparoscopic hernia repair (65%) and 12 of 23 who underwent an open inguinal hernia repair (52%) over the six-month period returned a completed questionnaire. Of the health care professionals, 65 primary care physicians were identified and sent a questionnaire, of whom 53 GPs (81.5%) replied. From the Yorkshire Deanery database, 65 trainees were identified (SpR, SHO, HO and non-career grades), of whom 41 (63.2%) surgically trained doctors returned a completed questionnaire. Among the responders we also included four consultant surgeons who had performed the operations on patients in our hospital. Overall, 130 participants were sent study forms, of which 94 (72.3%) health professionals responded with a completed questionnaire.
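For readers who want to see how these non-parametric comparisons work in practice, the sketch below runs the three tests named above on invented data. SciPy is assumed to be available; the study itself used the Analyse-it package, not Python, and none of these numbers are from the study.

```python
# Illustrative only: invented data, not the study's dataset.
from scipy import stats

# Hypothetical weeks to return to driving, by group.
surgeons = [2, 2, 1, 2, 3]
gps      = [2, 1, 2, 2, 2]
patients = [1, 1, 1, 2, 1]

# Kruskal-Wallis one-way ANOVA across the three independent groups.
h_stat, p_overall = stats.kruskal(surgeons, gps, patients)

# Mann-Whitney U test for one pairwise comparison of independent samples.
u_stat, p_pair = stats.mannwhitneyu(surgeons, patients, alternative="two-sided")

# Wilcoxon signed-rank test for two related samples
# (e.g. expected vs. actual weeks off work for the same patients).
expected = [2, 3, 2, 4, 2]
actual   = [4, 4, 3, 6, 2]
w_stat, p_paired = stats.wilcoxon(expected, actual)

print(f"Kruskal-Wallis p={p_overall:.3f}, "
      f"Mann-Whitney p={p_pair:.3f}, Wilcoxon p={p_paired:.3f}")
```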
Varicose Vein Surgery (Table 1)

Activity | Surgeons | GPs | Patients | Overall (K) | Surgeons vs. GPs (M) | Surgeons vs. Patients (M) | GPs vs. Patients (M)
Office Work | 2 (1-2) | 2 (1-2) | 1 (1-2) | 0.13 | 0.56 | 0.10 | 0.05
Heavy Work | 3 (2-5) | 4 (2-4) | 1 (1-1.75) | <0.01 | 0.75 | <0.01 | <0.01
Driving | 2 (1-2) | 2 (1-2) | 1 (1-1) | <0.01 | 0.24 | <0.01 | 0.02
Normal Activities | 2 (2-4) | 2 (2-4) | 1.5 (1-2) | 0.05 | 0.57 | 0.04 | 0.02
Table 1: Time taken to return to work, driving and daily activities as experienced by patients and as suggested by both surgically trained doctors and GPs for unilateral varicose vein surgery. Time: median time to return to activity (IQR, weeks). K: Kruskal-Wallis ANOVA; M: Mann-Whitney U test; p<0.05 deemed significant.

Of the 19/48 patients who returned a completed questionnaire, eleven (57.8%) were women, with an overall median age of 44 years (range 21-64 years). Seventeen of the 19 patients worked (89%), 11 of whom undertook office work (57.8%). Patients tended to return to driving and normal activities more quickly than recommended by doctors. GPs and surgeons offered similar advice with regard to return to all activities following varicose vein surgery. Nine of the 19 patients could not recall receiving any advice regarding when to return to driving, and five patients received no advice about when to return to work. No significant difference was observed between the expected and actual time off work (2 weeks vs. 1 week; p=0.15, Wilcoxon signed-rank test). Fifteen of the 19 patients (79%) said that their recovery was what they had expected, the reasons for not meeting expectations being wound infection in two patients, and bruising and a larger incision in one patient each. Seventeen patients had their surgery performed as a day case (89.4%), of whom fifteen (88.2%) stated that they would have surgery as a day case again.
Laparoscopic Cholecystectomy (Table 2)

Activity | Surgeons | GPs | Patients | Overall (K) | Surgeons vs. GPs (M) | Surgeons vs. Patients (M) | GPs vs. Patients (M)
Office Work | 2 (1-2) | 2 (2-3) | 5 (3-7) | <0.01 | 0.02 | <0.01 | <0.01
Heavy Work | 4 (2-4) | 4 (4-6) | 2 (1.5-4) | <0.01 | <0.01 | 0.26 | 0.04
Driving | 2 (1-2) | 2 (1-3) | 2 (1-4) | 0.19 | 0.19 | 0.10 | 0.43
Normal Activities | 2 (1-4) | 3 (2-4) | 4 (2-6) | 0.19 | 0.20 | 0.09 | 0.47
Table 2: Time taken to return to work, driving and daily activities as experienced by patients and as suggested by both surgically trained doctors and GPs for laparoscopic cholecystectomy. Time: median time to return to activity (IQR, weeks). K: Kruskal-Wallis ANOVA; M: Mann-Whitney U test.

Of the 44/72 patients who returned a completed questionnaire, 39 were women (88.6%), with an overall median age of 47 years (range 20-63 years). Thirty-two of the 44 patients worked (72%), 25 of whom undertook office work (56%). Patients returned to office work significantly later than recommended by both groups of doctors. Overall, patients took a significantly shorter time to return to work that involved lifting heavy objects, and surgeons recommended shorter times to return to such work than GPs. Of further interest is the observation that patients undertaking heavy work returned to work sooner than those undertaking office work. There was no significant difference between the time patients took to return to driving and normal activities and the advice given by either group of doctors. Ten of the 44 patients (22%) could not recall receiving any advice regarding when to return to driving, and seven (15%) stated they received no advice about when to return to work. Overall, patients expected a significantly shorter time off work than they actually experienced (2.5 weeks vs. 4 weeks; p<0.01, Wilcoxon signed-rank test). Twenty-one of the 44 patients (48%) said that their recovery was not what they had expected: of these, 6 (28%) said that their recovery was better than expected, 5 (23%) said that it was longer than expected, and the rest complained of pain or wound infection. Seventeen patients had their surgery performed as a day case (38%), of whom 11 (64%) said that they would have surgery as a day case again. A significantly higher proportion of GPs felt that this procedure was suitable for day case surgery compared with the proportion of patients who actually underwent the procedure as a day case (p=0.02, chi-squared test).
Laparoscopic Inguinal Hernia Repair (Table 3)

Activity | Surgeons | GPs | Patients | Overall (K) | Surgeons vs. GPs (M) | Surgeons vs. Patients (M) | GPs vs. Patients (M)
Office Work | 2 (1-2) | 2 (1-2) | 2 (1-2.75) | 0.73 | 0.56 | 0.48 | 0.714
Heavy Work | 6 (4-6) | 4 (4-6) | 3 (2-4) | 0.03 | 0.31 | 0.01 | 0.03
Driving | 2 (1-4) | 2 (1-2) | 1 (1-2.25) | 0.22 | 0.21 | 0.12 | 0.46
Normal Activities | 2 (2-4) | 3 (2-4) | 2.5 (1.25-3) | 0.41 | 0.87 | 0.31 | 0.17
Table 3: Time taken to return to work, driving and daily activities as experienced by patients and as suggested by both surgically trained doctors and GPs for laparoscopic hernia repair. Time: median time to return to activity (IQR, weeks). K: Kruskal-Wallis ANOVA; M: Mann-Whitney U test.

Of the 23/35 patients who returned a completed questionnaire, the majority had bilateral hernias repaired. Twenty-two were men (95%), with an overall median age of 48 years (range 35-63 years). Twenty-one of the 23 patients worked (91%), 10 of whom undertook office work (43%). No significant difference was found between the actual time taken to return to office work and the advice given by either group of doctors, whereas patients returned to heavy work significantly sooner than recommended by both groups. There was no significant difference between the time taken to return to driving and normal activities and the advice given by either group of doctors. Three patients (13%) could not recall receiving any advice regarding when to return to driving, and six (26%) could not recall receiving any advice regarding when to return to work. There was no significant difference between the time patients expected to be off work and the time actually taken (2 weeks vs. 2 weeks; p>0.05, Wilcoxon signed-rank test). Nine of the 23 patients (39%) said that their recovery was not what they had expected: of these, 2 (22%) said that their recovery was longer than expected, 4 (44%) experienced more pain than expected, one (11%) said that the recovery time was much shorter, and one (11%) experienced some bleeding from the umbilical port. Twenty patients (86%) underwent their surgery as a day case; of these 20, 16 (69%) said that they would have their surgery as a day case again.
Open Inguinal Hernia Repair (Table 4)

Activity | Surgeons | GPs | Patients | Overall (K) | Surgeons vs. GPs (M) | Surgeons vs. Patients (M) | GPs vs. Patients (M)
Office Work | 2 (2-2) | 2 (1.25-3) | 4 (3-4) | 0.01 | 0.07 | <0.01 | 0.05
Heavy Work | 6 (4-6) | 6 (4-7.75) | 5 (4.25-5.75) | 0.57 | 0.49 | 0.47 | 0.39
Driving | 3 (2-4) | 2 (2-3) | 2 (1-2) | 0.03 | 0.06 | 0.02 | 0.15
Normal Activities | 2 (2-2) | 2 (1.25-3) | 4 (2.5-5) | <0.01 | 0.07 | <0.01 | 0.01
Table 4: Time taken to return to work, driving and daily activities as experienced by patients and as suggested by both surgically trained doctors and GPs for open hernia repair. Time: median time to return to activity (IQR, weeks). K: Kruskal-Wallis ANOVA; M: Mann-Whitney U test; p<0.05 deemed significant.

All 12/23 patients who returned a completed questionnaire were men, with an overall median age of 54 years (range 42-65 years). Nine of the 12 patients worked (75%), 5 of whom undertook office work (41%). Patients took a significantly longer time to return to office work than advised by either group of doctors, whereas no significant difference was observed for return to manual work. Surgeons advised a longer period of abstinence from driving than that actually taken by the patients. Patients also took a significantly longer time to return to normal activities than advised by either group of doctors. Two patients (16%) could not recall receiving any advice regarding when to return to driving, and one (8.3%) could not recall any professional advice about returning to work. There was no significant difference between the time patients expected to be off work and the time actually taken (3 weeks vs. 5 weeks; p>0.05, Wilcoxon signed-rank test). Five patients (41%) said that their recovery was not what they had expected: of these, 4 (80%) experienced more pain than expected and one (20%) experienced more bruising. Seven patients (58%) underwent their surgery as a day case and, of these, 5 (71%) said that they would have their surgery as a day case again.

Discussion
With the advent of day case surgery, an increasing number of health professionals give advice to patients about their post-operative course. Advocates of minimal access surgical techniques and day case surgery claim that these are associated with a reduction in the period of post-operative recovery1,2. The proposed benefits, however, may never be realised if there is no concordance in the advice given by medical practitioners. The advice given to patients is still based upon personal experience rather than firm scientific evidence, and indeed there have been few studies analysing patients' return to normal activities following surgery. Majeed et al questioned 59 general practitioners and 61 surgeons with regard to the time taken for young (25-year-old) and older (55-year-old) patients to return to sedentary, light manual and heavy manual work following a number of common surgical procedures (including varicose vein surgery, unilateral open inguinal hernia repair and laparoscopic cholecystectomy)3. The most striking finding was the enormous variation in opinion between different doctors: for example, a 55-year-old heavy manual worker having a haemorrhoidectomy could be given between one and 16 weeks off work depending on which doctor he or she consulted. Such wide variation was not observed in our study and, in general, the advice given by GPs and surgeons was similar, apart from the fact that surgeons advised a shorter period off office work for patients undergoing laparoscopic cholecystectomy. The end of the twentieth century brought an exponential growth in new surgical techniques for standard general surgical procedures.
Not only has there been an increase in the use of mesh for open inguinal hernia repairs, but there has also been increasing use of laparoscopic hernia repair, with the recent guidance from the National Institute for Health and Clinical Excellence (NICE) liable to further increase the role of laparoscopic repair4. Furthermore, there has been widespread acceptance of laparoscopic cholecystectomy and an increased awareness of the role of general anaesthesia in increasing the number of procedures that can be undertaken as a day case. Given these continuing developments in surgical technique, as well as in both pre- and post-operative care, the present advice and experience of GPs could be seen as somewhat outdated. Two surgeons within the unit perform laparoscopic hernia repair (one the transabdominal preperitoneal (TAPP) repair and one the totally extraperitoneal (TEP) repair), with three performing solely the open technique. Although based on a small sample size, our results match the evidence-based recommendations of NICE (www.nice.org.uk) and suggest that laparoscopic repair does reduce the time taken for post-operative recovery when compared with open repair. In fact, all patients returning to heavy work following laparoscopic hernia repair did so more quickly than advised by either GPs or surgeons, although, unlike the surgeons, GPs do tend to recognise the likely reduction in pain following a laparoscopic repair and alter the advice given to those in heavy work accordingly. Restriction of activity on the advice of surgeons may be based on their concern for tissue healing and strength, which may have arisen in the days when absorbable sutures such as catgut were used. The use of mesh should now change this thinking, and it has indeed been shown that there is no increase in the recurrence of inguinal hernias after an early return to work5. Office workers undergoing an open inguinal hernia repair take a longer time to return to work (4 weeks) than advised by both groups of doctors. Furthermore, patients undergoing laparoscopic cholecystectomy take a shorter time to return to heavy work than to office work. These results require further evaluation. At face value it would appear that doctors underestimate the time taken to return to office work and, in the case of cholecystectomy, overestimate the time it takes to return to heavy work. In fact, the patients in office work took a significantly longer time to return to work following cholecystectomy than those in heavy work. Although only 20% of the working cohort of patients who underwent cholecystectomy were in "heavy work", this result probably reflects the fact that a high proportion of people in heavy work are self-employed, and time off work is money lost: patients who are self-employed return to work much sooner than those in salaried jobs6. Furthermore, there may well be an element of low job satisfaction among people in office work, which has also been shown to be a major predictor of delayed return to work7. The time taken to return to work, however, may depend on the patient's expectation of convalescence time formed prior to surgery, which in many cases is based upon advice given by medical practitioners. Furthermore, the attitude of the medical profession in the post-operative period is important, as they have to issue the certification necessary to ensure financial compensation for the patient.
Patients undergoing varicose vein surgery returned to heavy work, driving and normal activities significantly sooner than suggested by either group of doctors. This may well reflect a recent concerted effort to encourage patients to walk to reduce the risk of DVT. All patients had long saphenous vein (LSV) surgery, by either the standard high tie, stripping of the LSV and multiple stab avulsions, or local ligation of the LSV. Overall, it would appear that a one-week period of recuperation is all that is needed following unilateral varicose vein surgery. The advent of minimally invasive treatment for varicose veins may result in an even shorter post-operative recovery period8. There are some shortcomings associated with this study. Questionnaire-based studies always present methodological issues, including problems with response rate. There is never an "average patient"; normal activities for one patient may be completely different from those of another, and any advice given should be individually tailored. Furthermore, occupations were not classified as either manual or office-based prior to the start of this study, but were classified on an individual basis during collation of the data. Nevertheless, we hope that the data presented here will help medical practitioners advise their patients about post-operative routine life activities.

Conclusion
We believe that our overall practice does not differ, with regard to the pre-, peri- and post-operative management of patients, from that of the majority of units within the UK. However, there may well be some variation in healing and time taken to return to work, and we would encourage other units to undertake similar studies to determine convalescence times.
Appendix 1
Sex (Male / Female)
Age at time of surgery.
Do you work
Yes / No
If yes, what job do you do?
How long did it take you to return to your normal activities of daily living following your operation (weeks)?
If you drive, how long did it take you to start driving again (weeks)?
What advice, if any, were you given about driving after your operation?
The following questions are to be completed if you do work.
Prior to your surgery, did you receive any information about how long you would be off work?
Yes / No
If YES, what information was given to you?
How long did you expect to be off work following your surgery (weeks)?
How long were you actually off work following your surgery (weeks)?
If you are in employment:
Did you get a sick-note:
· From the hospital
Yes / No
· From the GP
Yes / No
How long was the sick note for (weeks)?
Did the sick note need to be extended?
Yes / No
Was the recovery after your operation as you had expected it to be?
Yes / No
If no, why not?
Did you go home on the same day as you had your operation?
Yes / No
If YES, would you do the same if you had the operation again or would you prefer to stay overnight after your operation?
If you would prefer to stay overnight, why?
Appendix 2
Dear Doctor,
We at xxxxxx Hospital are undertaking a study to determine whether the information given to patients following routine general surgical procedures is consistent and compares to the actual recovery period experienced by the patients themselves. We would be grateful if you would consider the four general surgical procedures below and give us an average length of time (in weeks) that you would advise the patient to abstain from:
(a) office work
(b) heavy work
(c) driving
(d) normal activities of daily living
The general surgical procedures to be considered are:
1) mesh repair of an inguinal hernia (unilateral)
2) laparoscopic hernia repair
3) unilateral varicose vein surgery
4) laparoscopic cholecystectomy
Introduction
A hydatid cyst is the larval stage of a small tapeworm, Echinococcus granulosus. This is an emerging zoonotic parasitic disease throughout the world, thought to cause an annual loss of US $193,529,740.1 Hydatid cysts are more prevalent in Australia, New Zealand, South America, Russia, France, China, India, the Middle East and Mediterranean countries.2,3,4 They are most commonly (about 50-75%) seen in children and young adults.4,5,6 The liver is the most common organ involved (77%), followed by the lungs (43%).7,8,9,10 However, some researchers report that the lung is the most common organ involved in children, possibly due to bypass of the liver by lymphatics and a higher rate of incidental findings in the lungs when children are assessed for other respiratory infections.8,11,12,13 Hydatid cysts have been reported in the brain (2%),3,4,5,7,8,14,15 heart (2%),8,10,13,16 kidneys (2%),9,10,11 orbit (1%),17,18 spinal cord (1%),3,19 spleen,4 spine,3,8 spermatic cord20 and soft tissues.8 In the Mediterranean region, however, the incidence of brain hydatid cysts has been reported to be higher (7.4-8.8%).21 Surgery remains the treatment of choice, although recently some new modalities have been described.5,8,22 Careful removal of the lesion is of considerable importance; otherwise, fatal complications are inevitable.23,24,25 We describe the case of a 6-year-old boy who came to our department with various neurological manifestations. The main purpose of this study is to demonstrate the unusual symptoms of the patient and the enormity of the operated cyst, which was fully resected without rupture.

Case Report
A 6-year-old boy was referred to our Neurosurgery Department with a four-week history of ataxia and left-sided weakness. His vital signs were normal and his Glasgow Coma Scale (GCS) score was 15. The symptoms had started about six months earlier with numbness and paraesthesia of the toes. Subsequently he developed intermittent nausea and vomiting, then left-sided weakness and finally ataxia. He also had a few focal convulsions but did not complain of headache. Fundoscopy revealed bilateral frank papilloedema. On examination, the patient had nystagmus and a positive Romberg's test. Laboratory data showed mild leucocytosis without any significant rise in eosinophils, and liver enzymes were normal. The enzyme-linked immunosorbent assay (ELISA) for hydatid cysts was negative. Plain chest X-ray and ultrasound scan of the abdomen and pelvis were also normal. Brain computed tomography (CT) of the frontal and parietal lobes demonstrated a single large, spherical, well-defined, thin-walled, homogeneous cyst, with an inner density similar to that of cerebrospinal fluid (CSF) and a wall that did not show enhancement [fig 1(a)]. This cystic structure caused a mass effect and a midline shift towards the left, as well as hydrocephalus, possibly due to obstruction. Magnetic resonance imaging (MRI) of the brain showed cystic signal intensity similar to that of CSF, without ring enhancement or oedema [fig 2].

Fig 1 (a): Pre-operative unenhanced CT scan showing a large CSF-density cystic lesion on the right side causing mass effect and midline shift to the left. There is no peri-lesional oedema.
Fig 1 (b): Post-operative CT scan showing a large void, which can lead to dangerous collapse. A mild haematoma is also seen.
Fig 2 (a): T1-weighted axial MRI of the brain demonstrating a cyst density similar to CSF.
Fig 2 (b): T2-weighted MRI showing no ring enhancement or oedema.
The periventricular hyperintensity on the left side is probably due to obstructive hydrocephalus.
Fig 3: The cyst removed in toto at operation; it appears creamy and smooth.
Taking all the above findings together, a diagnosis of hydatid cyst was made and a right frontotemporoparietal craniotomy was performed. A large cystic structure (14×14×12 cm) was delivered with the utmost care to avoid rupture and spillage [fig 3]. A hydatid cyst was confirmed on the pathology report. A post-operative CT scan showed a large space without any residual matter [fig 1(b)]. Post-operatively, albendazole 15 mg/kg was started and continued for four weeks. The patient showed marked improvement in his neurological deficit and was discharged after one week with close follow-up.

Discussion/Review of Literature
Life Cycle
Hydatidosis is caused by Echinococcus granulosus, which occurs mainly in dogs. Humans, who act as intermediate hosts, become infected incidentally by ingesting eggs from the faeces of the infected animal. The eggs hatch inside the intestines, penetrate the intestinal wall, enter blood vessels and eventually reach the liver, where they may form cysts or move on towards the lungs. Even after the pulmonary filter, a few still make it to the systemic circulation and can lodge in almost any part of the body, including the brain, heart and bones.2,3,8,14,16,26 Brain hydatid cysts are relatively rare, accounting for up to 2% of total cases.4,5,7 The actual percentage may be higher than reported in the literature, owing to under-reporting. Brain hydatid cysts can be primary (single) or secondary (multiple).2,3,4,5,7 The latter are thought to arise from multiple scolices released from the left side of the heart following cyst rupture in the heart2,3,5,27 or from spontaneous, traumatic or surgical rupture of a solitary cranial cyst.3,5 Cysts mostly involve the territory of the middle cerebral artery,4,7 but other locations, including intraventricular, the posterior fossa and the orbit, have also been reported.15,17,18,28 The wall of the cyst consists of an inner endocyst (germinal layer) and an outer ectocyst (laminated layer). The host reacts to the cyst by forming a pericyst (fibrous capsule), which provides nutrients to the parasite; in the brain, due to minimal reaction, the pericyst is very thin. The endocyst produces scolices, which bud into the cyst cavity and may sediment within the hydatid cavity, commonly known as hydatid sand.3,14,29,30

Presentation and Diagnosis
Most hydatid cysts are acquired in childhood and become manifest during early adulthood.8,29 Cysts develop insidiously, usually being asymptomatic initially, and present with protean clinical and imaging features.3,5,6 In previous studies the most common presenting symptoms were headache and vomiting.4,5,7,14,15,28 Patients in the literature have also reported ataxia, diplopia, hemiparesis, abducens nerve palsy and even coma.5,7,15,28 Surprisingly, the patient in the present study did not have a headache and presented with paraesthesia and numbness of the toes; he later developed left-sided weakness, convulsions and finally ataxia, which correlates with previous studies.
The diagnosis of a hydatid cyst can sometimes be confused with other space-occupying lesions of the brain, especially abscesses, neoplasms and arachnoid cysts.14,31 In this study the patient had bilateral frank papilloedema, which is also mentioned in earlier reports.4,28 The Casoni and Weinberg tests, indirect haemagglutination, eosinophilia and ELISA are used in diagnosing hydatid cysts, but as brain tissue evokes minimal response, many results tend to be false negatives.2,5,8,25 In our case also, serology for hydatid cyst was negative. CT and MRI are used frequently in diagnosing these cystic lesions.3,8,14,23,32,33 However, MRI is considered superior in demonstrating the cyst rim.5,8,11,21,32,34 On CT, a solitary cyst appears well-defined, spherical, smooth, thin-walled and homogeneous, with an inner density similar to CSF and non-enhancing walls.11,29,32 The wall may appear iso-dense to hyper-dense on CT3,8 and, rarely, may become calcified.11,29,32 There is usually no surrounding brain parenchymal oedema; where oedema exists along with ring enhancement, it indicates inflammation and infection.7,11,32,33,34,35 Ring enhancement and peri-lesional oedema differentiate brain abscesses and cystic neoplasms from uncomplicated hydatid cysts.3,8 These findings can sometimes cause diagnostic dilemmas and misdiagnosis, and lead to catastrophic events.14 The cyst shows low signal intensity on T1-weighted and high signal intensity on T2-weighted MRI.2 MRI may also show peri-lesional oedema not seen on regular CT imaging.7 MRI may prove superior in determining the exact cyst location, the presence of superadded infection and the cystic contents, as well as in surgical planning and ruling out other diagnostic possibilities.14,33 We strongly recommend MRI for better evaluation of cystic brain lesions. Spontaneous cyst rupture can lead to different appearances, depending on which layers have been obliterated, and produces some specific signs.3 When only the endocyst ruptures, the cyst contents are held by the outer pericyst, giving the peculiar water lily sign, which is pathognomonic.3,8

Treatment
Though still in its infancy, medical therapy for small or inoperable brain hydatid cysts has been promising. Albendazole, alone or in combination with other compounds such as praziquantel, has been reported to give favourable results as an adjunct and, in certain circumstances, as the primary mode of treatment.2,36,37,38 It is reported that albendazole results in the disappearance of up to 48% of cysts and a substantial reduction in the size of the cysts in another 28%.2 The duration of treatment is four weeks or more, and recently many authors have favoured prolonged therapy. Changes in the levels of cyst markers such as alanine, succinate, acetate and lactate, measured before and during treatment on proton magnetic resonance spectroscopy (MRS), correlate well with shrinkage and resolution of cyst findings on conventional MRI and help in evaluating the efficacy of chemotherapy.39 Cysts may drain into the ventricles or rupture completely, causing spillage of contents into the subarachnoid space and leading to fatal anaphylactic shock, meningitis or local recurrence.3,5,22,25 Surgery is the mainstay of treatment for intracranial hydatid cysts, and the aim is to excise the cysts entirely without rupture, which can otherwise lead to catastrophic events as described earlier2,3,14,25.
The Dowling-Orlando technique remains the preferred method, in which the cyst is delivered by lowering the head of the operating table and instilling warm saline between the cyst and the surrounding brain.40 Even minimal spillage can have deleterious effects (1 ml of hydatid sand contains 400,000 scolices).14 The thin cyst wall, periventricular location and micro-adhesions to the parenchyma are the main problems encountered during the surgical procedure.1,22 The large cavity remaining after cyst removal can lead to many serious complications, such as cortical collapse, hyperpyrexia, brain oedema and cardio-respiratory failure.5 Recurrence remains a major concern and is managed by both antihelminthic chemotherapy and surgery; in a study conducted by Ciurea et al, 25% of patients had recurrence, which highlights the need for long-term follow-up.23 In the present study, given the huge size of the cyst and the progressive neurological deficit, it was not considered wise to rely on medical therapy alone. Surgery was performed, and post-operatively albendazole was started as an adjunct. We recommend that, in treating a brain hydatid cyst, the size of the cyst, its multiplicity and location, and the neurological deficit must all be taken into consideration.
Cervicogenic headache (CH) refers to head pain originating from pathology in the neck.1 However, the diagnosis of CH is still controversial2,3 and it is often misdiagnosed. The author was recently called to consult on a patient in a university hospital. The patient was a 28-year-old female with a six-month history of headache, described as continuous, dull and achy, mainly in the right occipital and parietal areas; sometimes she felt the headache behind the eyes. Her headache worsened periodically, several times a month, with nausea, photophobia and phonophobia. She had no previous history of headache until a whiplash injury six months before. She had been diagnosed as having 'migraine' and 'post-traumatic headache' and had used all the anti-migraine medications: 'Nothing was working.' The patient was admitted to hospital because of 'intractable headache.'
On the day when the author saw the patient, she was lying on the bed, with the room light turned off and a bed sheet covering her head and eyes. She was given Dilaudid, 2mg/h continuous intravenous (IV) drip, for the headache. The patient had normal results from magnetic resonance imaging (MRI) of the brain and lumbar puncture. According to the patient, no doctors had touched the back of her head and upper neck since admission. The author examined the patient and found a jumping tenderness over the right greater occipital nerve. The patient was given 2ml of 2% lidocaine with 40mg of Kenalog for the right greater occipital nerve (GON) block. Her headache was gone within five minutes and the Dilaudid drip was immediately discontinued. At follow-up four weeks later, the patient was headache-free. This was a typical missed case of CH (occipital neuralgia).
The concept of CH was first introduced by Sjaastad and colleagues in 1983.4 The International Headache Society published its first diagnostic criteria in 1998, which were revised in 2004.5 Patients with CH may have histories of head and neck trauma. Pain is often unilateral, and the headache is frequently localized in the occipital area, although pain may also be referred to the frontal, temporal or orbital regions. Headaches may be triggered by neck movement or sustained neck postures.6 The headache is constant with episodic throbbing attacks, like a migraine, and patients may have other symptoms mimicking a migraine, such as nausea, vomiting, photophobia, phonophobia and blurred vision. Because there is significant overlap of symptoms between CH and migraine without aura, CH is often misdiagnosed as migraine. CH is commonly found in patients after whiplash injuries, especially in the chronic phase.7
Anatomical studies have provided a basis for the pathogenesis of CH. The suboccipital nerve (dorsal ramus of C1) innervates the atlanto-occipital (AO) joint and the dura mater of the posterior fossa; a pathological condition of the AO joint is therefore a potential source of occipital headache. It has been reported that pain from the C2-3 and C3-4 cervical facet joints can radiate to the occipital, frontotemporal and even periorbital regions. Even pathology in the C5 or C6 nerve roots has been reported to cause headache.8 The trigeminocervical nucleus is a region of the upper cervical spinal cord where sensory nerve fibres in the descending tract of the trigeminal nerve (trigeminal nucleus caudalis) are believed to interact with sensory fibres from the upper cervical roots. This functional convergence of the upper cervical and trigeminal sensory pathways allows the bidirectional referral of painful sensations between the neck and the trigeminal sensory receptive fields of the face and head.
Clinicians should always put CH in the list of differential diagnoses when they work up a headache patient. A history of head/neck injury, and detailed examination of the occipital and upper cervical area, should be part of the evaluation. Patients with CH may have tenderness over the greater or lesser occipital nerve, cervical facet joints and muscles in the upper or middle cervical region. Diagnostic imaging such as X-ray, computerized tomography (CT) and MRI cannot confirm CH, but can lend support to its diagnosis.
Treatment of CH is empirical. This headache does not respond well to migraine medications; treatment should instead be focused on removing the source of pain at the occipito-cervical junction. Initial therapy should consist of non-steroidal anti-inflammatory drugs (NSAIDs) and physical therapy modalities.9 GON block is easy and safe to perform in the office10 and is effective in the treatment of occipital neuralgia and CH.11 The author followed a group of patients after GON block: the pain relief lasted an average of 31 days (unpublished data). If patients do not respond to GON block, diagnostic medial branch block and radiofrequency (RF) denervation of the upper cervical facet joints can be considered. Early studies reported positive results.12 A subsequent randomized study found no benefit from RF; however, there were only six cases in each group,13 which significantly limited the power and validity of that study's conclusion. Surgical treatment of cervical degenerative disc disease may offer effective pain relief for CH. Jansen14 reported 60 cases of CH treated mainly with C4/5, C5/6 and C6/7 nerve root decompression; more than 63% of patients reported long-lasting freedom from pain or improvement (>50%).
CH is common, with a prevalence of between 0.4% and 2.5% in the general population. However, compared with other common pain conditions, CH is under-studied: a Medline search found 6818 abstracts on migraine published in 2009 but only 86 on CH. CH remains poorly studied and often misdiagnosed; it is time it received more attention.
Introduction
Most experts define infertility as not being able to get pregnant after at least one year of trying. Women who are able to get pregnant but then have recurrent miscarriages are also said to be infertile. The definition of infertility matters: the World Health Organization definition, based on 24 months of trying to get pregnant, is recommended as the definition that is useful in clinical practice and research across different disciplines.1

Magnitude of the Problem
Infertility is a growing problem across virtually all cultures and societies the world over, affecting an estimated 10-15% of couples of reproductive age. In recent years, the number of couples seeking treatment for infertility has dramatically increased due to factors such as the postponement of childbearing in women, the development of newer and more successful techniques for infertility treatment, and increasing awareness of available services. This increasing participation in fertility treatment has raised awareness and inspired investigation into the psychological ramifications of infertility. Consideration has been given to the association between psychiatric illness and infertility. Researchers have also looked into the psychological impact of infertility per se, and of prolonged exposure to intrusive infertility treatments, on mood and well-being. There is less information about effective psychiatric treatments for this population; however, there is some data to support the use of psychotherapeutic interventions2.

Why does infertility have a psychological effect on the couple?
Parenthood is one of the major transitions in adult life for both men and women. The stress of the non-fulfilment of a wish for a child has been associated with emotional sequelae such as anger, depression, anxiety, marital problems and feelings of worthlessness. Partners may become more anxious to conceive, ironically increasing sexual dysfunction and social isolation. Marital discord often develops in infertile couples, especially when they are under pressure to make medical decisions. Couples experience stigma, a sense of loss, and diminished self-esteem in the setting of their infertility3.

Male and female partners respond differently
In general, in infertile couples women show higher levels of distress than their male partners4; however, men's responses to infertility closely approximate the intensity of women's responses when infertility is attributed to a male factor3. Both men and women experience a sense of loss of identity and have pronounced feelings of defectiveness and incompetence. Women trying to conceive often have rates of clinical depression similar to those of women with heart disease or cancer. Even couples undertaking IVF face considerable stress. Emotional stress and marital difficulties are greater in couples where the infertility lies with the man. The psychological impact of infertility can therefore be devastating to the infertile person and to their partner.

Factors influencing psychological stress
According to one study done in Sweden, three separate factors seem to contribute to the psychological stress men and women experience as a result of their infertility. In order of importance for the women, these were:
1. "Having Children is a Major Focus of Life"
2. "The Female Role and Social Pressure"
3. "Effect on Sexual Life"
The men in the study reversed the order of importance of factors 1 and 2; the third factor was equally significant to both the men and the women.
It was also shown that women experienced their infertility more strongly than the men, and that women showed a more intense desire to have a baby.5
Behaviour of the couple as a result of infertility: Stress, depression and anxiety are described as common consequences of infertility. A number of studies have found that the incidence of depression in infertile couples presenting for infertility treatment is significantly higher than in fertile controls, with prevalence estimates of major depression in the range of 15%-54%.6,7,8,9 Anxiety has also been shown to be significantly higher in infertile couples than in the general population, with 8%-28% of infertile couples reporting clinically significant anxiety.9,10 The causal role of psychological disturbances in the development of infertility is still a matter of debate. A study of 58 women by Lapane and colleagues reported a 2-fold increase in the risk of infertility among women with a history of depressive symptoms; however, the authors were unable to control for other factors that may also influence fertility, including cigarette smoking, alcohol use, decreased libido and body mass index.11
Psychological factors may also affect reproductive capacity: Although infertility affects a couple's mental health, different psychological factors have also been shown to affect the reproductive ability of both partners. Proposed mechanisms through which depression could directly affect infertility involve the physiology of the depressed state, such as elevated prolactin levels, disruption of the hypothalamic-pituitary-adrenal axis, and thyroid dysfunction. One study of 10 depressed and 13 normal women suggests that depression is associated with abnormal regulation of luteinizing hormone, a hormone that regulates ovulation.12 Changes in immune function associated with stress and depression may also adversely affect reproductive function.13 Further studies are needed to distinguish the direct effects of depression or anxiety from those of associated behaviours (e.g., low libido, smoking, alcohol use) that may interfere with reproductive success. Since stress is associated with similar physiological changes, a history of high levels of cumulative stress associated with recurrent depression or anxiety may also be a causative factor.
Result of treatment: While many couples presenting for infertility treatment have high levels of psychological distress associated with infertility, the process of assisted reproduction itself is also associated with increased levels of anxiety, depression and stress.14 A growing number of research studies have examined the impact of infertility treatment at different stages, with most focusing on the impact of failed IVF trials.15 Comparisons between women undergoing repeated IVF cycles and first-time participants have also suggested that ongoing treatment may lead to an increase in depressive symptoms.16 The data, however, are still controversial, since other studies have found minimal psychological disturbance induced by the infertility treatment process or IVF failure.17,18 In light of this discrepancy, there has been increasing interest in the factors that contribute to dropping out of infertility treatment, since this population is often not included in, or declines to participate in, studies.
While cost and physicians' refusal to continue treatment have been cited as reasons for discontinuing treatment, recent research suggests that a significant number of dropouts are due to psychological factors.19,20,21 The outcome of infertility treatment may also be influenced by psychological factors. A number of studies have examined stress and mood state as predictors of outcome in assisted reproduction, and the majority support the theory that distress is associated with lower pregnancy rates among women pursuing infertility treatment.7,16,22,23,24,25
Conclusion: In light of the data suggesting that psychological symptoms may interfere with fertility, with the success of infertility treatment and with the ability to tolerate ongoing treatment, interest in addressing these issues during infertility treatment has grown. Since psychological factors play an important role in the pathogenesis of infertility, their exploration is an important part of managing this devastating problem, which has cultural as well as social impact.
Infections with Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are characterized by a high global prevalence, a complex clinical course, and the limited effectiveness of currently available antiviral therapy. Approximately 2 billion people worldwide have been infected with HBV and about 350 million live with chronic infection. An estimated 600,000 persons die each year from the acute or chronic consequences of HBV infection.1, 2 WHO also estimates that about 200 million people, or 3% of the world's population, are infected with HCV, with 3 to 4 million persons newly infected each year. This results in 170 million chronic carriers globally at risk of developing liver cirrhosis and/or liver cancer.3, 4 Hence, HBV and HCV infections account for a substantial proportion of liver disease worldwide.
The two viruses differ in several respects: HBV belongs to the Hepadnaviridae family whereas HCV belongs to the Flaviviridae family, and HBV has a circular, partially double-stranded DNA genome of approximately 3.2 kb whereas HCV has a single-stranded RNA genome of approximately 9.6 kb. They nonetheless share some biological features. Both show large heterogeneity of their viral genomes, giving rise to various genotypes. Based on genomic nucleotide sequence divergence of greater than 8%, HBV has been classified into eight genotypes, labelled A through H.5,6,7,8 Different isolates of HCV show substantial nucleotide sequence variation distributed throughout the genome. The regions encoding the envelope proteins are the most variable, whereas the 5' non-coding region (NCR) is the most conserved;9 because of this, several researchers have considered the 5' NCR the region of choice for virus detection by reverse transcription (RT)-PCR. Sequence analysis of isolates from different geographical areas around the world has revealed six different genotypes, numbered 1 to 6,10 and a typing scheme using restriction fragment length polymorphism analysis of the 5' NCR was able to differentiate these six major genotypes.11 Both HBV and HCV genotypes display significant differences in their global distribution and prevalence, making genotyping a useful method for determining the source of HBV and HCV transmission in an infected localized population.12-27
Many studies have examined the prevalence of HBV and HCV co-infection among HIV-infected individuals and intravenous drug users globally,28-34 but there are only a few studies of the epidemiology of these infections in the normal healthy population.35,36,37 The objective of this study was to determine the seroprevalence of HBV and HCV, co-infection with both viruses, and their genotypes among an apparently healthy female population, as well as in known HBV patients, in Karachi, a major city in the province of Sindh, Pakistan. This study also aimed to provide baseline data on HBV/HCV co-infection, in order to gain a better understanding of the public health issues in Pakistan. We evaluated the antigen, antibody and genotype status of both HBV and HCV in 144 otherwise healthy female individuals and 28 diagnosed HBV patients.
Materials and Methods:
Study duration: March 2002 to October 2006, and April 2009.
Study participants: A total of 4000 blood serum samples were collected from healthy female student volunteers, and 28 serum samples (April 2009) from patients already diagnosed as hepatitis B positive; participants were aged 16 to 65 years and came from two Karachi universities and one Karachi hospital. University samples were obtained through the Department of Microbiology, University of Karachi and the Department of Microbiology, Jinnah University for Women. Hospital samples were obtained through the Pathological Laboratory of Burgor Anklesaria Nursing Home and Hospital.
Ethical Consent: Signed informed consent forms were collected from all volunteers following Institutional Review Board policies of the respective institutes.
Pre-study screening: All 4028 volunteers had health checkups by a medical doctor before collection of specimens. They were asked about their history of jaundice, blood transfusion, sexual contacts and exposure to needles, and whether they had undergone any surgical or dental procedures.
Biochemical & haematological screening: On completion of the medical checkups, volunteers were asked to give 5mL of blood for haematological tests [complete blood picture (CP), haemoglobin percentage (Hb%) and erythrocyte sedimentation rate (ESR)] and 10mL for biochemical tests [direct bilirubin, indirect bilirubin, total bilirubin, aspartate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP)].
Serological analysis: Samples were also subjected to serological analysis for hepatitis B surface antigen (HBsAg), HBs antibodies and HCV antibodies using rapid immunochromatography kits (ICT, Australia and Abbott, USA). Confirmatory testing for HBsAg was done by ELISA (IMX, Abbott, USA).
All the above-mentioned preliminary tests were conducted at the respective institutes in Karachi. Of the 4000 female volunteers from the two universities, 144 otherwise healthy females tested positive for HBsAg, and 2 of these 144 were also positive for anti-HCV antibodies. The 28 HBV patients from Anklesaria Hospital were tested only for HBsAg, and all 28 were positive. Hence, a total of 172 HBV positive samples (144 + 28) from Karachi, including the 39 samples subsequently found to be HCV positive by PCR, were used for genotypic evaluation at Claflin University, South Carolina, USA. Specific ethnicity was not determined, but we assume these study participants represent a collection of different ethnic groups in Pakistan.
DNA/RNA extraction and amplification of 172 HBV positive samples: DNA was extracted for HBV analysis, and RNA for HCV analysis, from 200μL of each of the 172 HBV positive serum samples using the PureLink™ Viral RNA/DNA Mini Kit according to the manufacturer's instructions (Invitrogen, CA). Amplification was carried out using puReTaq Ready-To-Go PCR Beads (Amersham Biosciences, UK).
Determination of HBV and HCV genotypes by nested PCR: The primer sets for the first- and second-round PCRs, the PCR amplification protocol, and the genotyping primers for both the HBV and HCV genomes followed previously reported methods for all 172 samples.45,46 First-round amplification targeted a 1063bp product for the HBV genome and a 470bp product for the HCV genome. These PCR products were then used as templates for genotyping, covering HBV genotypes A to F and HCV genotypes 1 to 6. The genotypes present in each sample were determined by separating the genotype-specific DNA bands on 2% agarose gels stained with ethidium bromide. The sizes of the PCR products were estimated from the migration pattern of a 50bp DNA ladder (Promega, WI).
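To make the genotype-calling step concrete, the sketch below (in Python) assigns a genotype from the estimated size of a genotype-specific band. The band sizes are those reported in the legends of Figures 1 and 2; the matching cutoff and function names are illustrative assumptions, not part of the published protocol.

```python
# Illustrative sketch only: calls an HBV or HCV genotype from the estimated
# size of its genotype-specific PCR band (sizes as given in Figures 1 and 2).
# The max_diff cutoff is an assumed value for illustration.

HBV_BANDS = {68: "A", 281: "B", 122: "C", 119: "D", 167: "E", 97: "F"}
HCV_BANDS = {190: "2a", 258: "3a", 232: "3b", 417: "5a", 300: "6a"}

def call_genotype(band_bp, bands, max_diff=10):
    """Return the genotype whose expected band size is closest to the
    observed size, or None if nothing lies within max_diff bp."""
    expected, genotype = min(bands.items(), key=lambda kv: abs(kv[0] - band_bp))
    return genotype if abs(expected - band_bp) <= max_diff else None

# Example: a band estimated at ~120bp from an HBV reaction reads as genotype D
# (119bp); note that genotype C (122bp) is only 3bp away, so in practice
# careful lane-by-lane comparison against the 50bp ladder matters.
print(call_genotype(120, HBV_BANDS))  # -> D
print(call_genotype(260, HCV_BANDS))  # -> 3a
```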
Results:
Before screening for HBV status, all 4000 healthy female volunteers from the Department of Microbiology, University of Karachi, and the Department of Microbiology, Jinnah University for Women underwent routine physical checkups; volunteers who were apparently unhealthy or malnourished were excluded (23 volunteers). All 4000 serum samples were screened by immunochromatography for the presence of HBsAg, anti-HBs antibodies and anti-HCV antibodies, and positive results were confirmed by ELISA. Of the 4000 subjects, 144 (3.6%) tested positive for HB surface antigen (HBsAg) and 2 (0.05%) were positive for anti-HCV antibodies; 3856 (96.4%) were negative for HBsAg and 3998 (99.95%) were negative for HCV antibodies by both immunochromatography and ELISA. Of the 144 individuals who tested positive for HBsAg, 20 (13.8%) were positive for anti-HB surface antibodies and 2 (1.4%) tested positive for anti-HCV antibodies. The remaining 28 serum samples, obtained from already diagnosed HBV positive patients at Anklesaria Hospital, were tested only for HBsAg and were all positive.
The haematological parameters (WBC count, RBC count, haematocrit and platelet count) of the 172 HBsAg positive individuals were within the normal recommended ranges, while the mean Hb% was 9.8±1.6 g/dL. Direct bilirubin (0 to 0.3 mg/dL), indirect bilirubin (0.1-1.0 mg/dL), total serum bilirubin (0.3 to 1.9 mg/dL), ALT (0-36 U/L), AST (0-31 U/L) and alkaline phosphatase (20-125 U/L) were also within the normal ranges for 129 HBsAg positive individuals; the exceptions were raised ALT (>36 U/L) and AST (>31 U/L) levels in 38 participants with a previous history of jaundice who were also positive for HBsAg.
All 172 samples that were positive for HBsAg were examined by PCR for the presence of the different HBV genotypes as well as the different HCV genotypes, in order to assess co-infection with both viruses. Genotyping was carried out at the South Carolina Center for Biotechnology, Department of Biology, Claflin University, Orangeburg, SC, USA. For HBV, Mix A primers were targeted to amplify genotypes A, B and C, and Mix B primers genotypes D, E and F. For HCV, Mix A primers were targeted to amplify genotypes 1a, 1b, 1c, 3a, 3c and 4, and Mix B primers genotypes 2a, 2b, 2c, 3b, 5a and 6a.
Table 1. Prevalence of both single and co-infection of HBV genotypes among the apparently healthy female student sample and known HBV positive patients from Anklesaria hospital in Karachi.
2 Universities                       Samples     Percentage
Total HBV                            144
Genotype D                           70          48.6%
Genotype A                           8           5.5%
Genotype F                           7           4.9%
Genotype B                           5           3.5%
Genotype E                           3           2.1%
Genotype C                           2           1.4%
Co-infections of HBV genotypes       49/144      34%
  Genotype B/D                       30/144      20.8%
  Genotype A/D                       11/144      7.6%
  Genotype C/D                       4/144       2.8%
  Genotype B/C                       4/144       2.8%

Anklesaria Hospital                  Samples     Percentage
Total HBV                            28
Genotype D                           19          67.9%
Genotype A                           3           10.7%
Genotype B                           1           3.6%
Genotype C                           1           3.6%
Genotype F                           1           3.6%
Co-infections of HBV genotypes
  Genotype B/A                       3/28        10.7%
Figure 1: Electrophoresis patterns of PCR products from the different HBV genotypes as determined by the PCR genotyping system. Genotype A: 68bp, genotype B: 281bp, genotype C: 122bp, genotype D: 119bp, genotype E: 167bp and genotype F: 97bp.
Table 1 shows the prevalence of both single and mixed infections with HBV genotypes at the two universities in Karachi and at Anklesaria Hospital. Ten representative samples in Fig. 1 show single and mixed HBV infections.
Besides determining the HBV genotype status of these 172 patients by PCR, we also determined their HCV genotype status by PCR, to establish whether co-infection with the two viruses existed in the same individuals, as only 2 samples had tested positive for anti-HCV antibodies by rapid immunochromatography. Table 2 shows the prevalence of HCV genotypes among the apparently healthy female student population from the two universities in Karachi and among the known HBV patients from Anklesaria Hospital. Fig. 2 shows the HCV genotypes detected in the same 10 representative samples shown in Fig. 1.
Table 2. Prevalence of HCV genotypes among the apparently healthy female student sample, and known HBV individuals from Anklesaria hospital in Karachi.
2 Universities                       Samples     Percentage
Total HCV/Total HBV                  39/144      27.1%
Genotype 3a                          26/39       66.6%
Genotype 6a                          5/39        12.8%
Genotype 3b                          4/39        10.3%
Genotype 5a                          4/39        10.3%

Anklesaria Hospital                  Samples     Percentage
Total HCV/Total HBV                  4/28        14.3%
Genotype 3a                          2/28        7.1%
Genotype 2a                          1/28        3.6%
Genotype 5a                          1/28        3.6%
Figure 2: The sizes of the genotype-specific bands for HCV amplified by PCR genotyping method are as follows: genotype 2a, 190 bp; genotype 3a, 258 bp; genotype 3b, 232 bp; genotype 5a, 417 bp; and genotype 6a, 300 bp.
To summarize, of the 172 HBsAg positive samples from the two universities (144 samples) and Anklesaria Hospital (28 samples), 89 (51.7%) were genotype D, 11 (6.4%) genotype A, 8 (4.6%) genotype F, 6 (3.5%) genotype B, 3 (1.7%) genotype E, and 3 (1.7%) genotype C. Of the 43 samples positive for HCV by PCR from the two universities (39/144) and Anklesaria Hospital (4/28), 28 (65.1%) showed infection with genotype 3a, followed by genotypes 5a (5/43, 11.6%), 6a (5/43, 11.6%), 3b (4/43, 9.3%) and 2a (1/43, 2.3%).
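The pooled percentages above follow directly from the per-site counts in Table 1; a minimal sketch of that arithmetic (in Python, with counts copied from Table 1) is shown below.

```python
# Recomputes the pooled HBV genotype distribution reported above by summing
# the university (n=144) and hospital (n=28) counts from Table 1.
from collections import Counter

university = Counter({"D": 70, "A": 8, "F": 7, "B": 5, "E": 3, "C": 2})
hospital = Counter({"D": 19, "A": 3, "B": 1, "C": 1, "F": 1})
total_samples = 144 + 28  # 172 HBsAg positive samples in all

pooled = university + hospital
for genotype, count in pooled.most_common():
    share = 100 * count / total_samples
    print(f"Genotype {genotype}: {count}/{total_samples} = {share:.1f}%")
# e.g. genotype D: 89/172 = 51.7%; genotype A: 11/172 = 6.4%
```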
Discussion:
Viral hepatitis due to HBV and HCV causes significant morbidity and mortality worldwide. The global prevalence of HCV is 3%,38 and the carrier rate of HBsAg varies from 0.1%-0.2% in Britain and the USA, to 3% in Greece and southern Italy, and up to 15% in Africa and Asia.39 HBV is highly endemic in Pakistan, but studies are too limited to give a clear picture of its prevalence at the national level, especially among apparently healthy individuals. Most previous studies targeted small groups of individuals with particular clinical indications, so they do not accurately reflect the overall prevalence in Pakistan.40 Our previous study examined a first group of 4000 healthy female students from the two universities, i.e., the Department of Microbiology, University of Karachi and the Department of Microbiology, Jinnah University for Women, for the prevalence of HBV. We reported earlier that genotype D appears to be the dominant genotype in Karachi, Pakistan's apparently healthy female population, with genotype B the next most prevalent.41, 42 In the present study we assessed the prevalence of both HBV and HCV in a second group of 4000 healthy female students from the same two universities, as well as in the 28 already diagnosed HBV patients from Anklesaria Hospital in Karachi, Pakistan.
Both HBV and HCV are present in the Pakistani population, with varying reports of disease prevalence. HCV is one of the silent killer infections spreading undetected in Pakistan: there are often no clinical symptoms, and by the time HCV is diagnosed considerable damage has already been done. In Pakistan, the prevalence of HBsAg has been reported to range from 0.99% to 10% in different groups of individuals,43-52 and that of HCV antibodies from 2.2% to 14%.53-56 A recent study conducted in Pakistan showed that of 5707 young men tested, 95 (1.70%) were positive for anti-HCV and 167 (2.93%) for HBsAg.57 Our previous study showed the prevalence of HBsAg among young, otherwise healthy women to be 4.5%.41,42 The present study shows a prevalence of HBsAg in otherwise healthy young women of 3.6%, with 0.98% testing positive for anti-HCV antibodies. On the basis of studies conducted in different provinces of Pakistan, the prevalence of HBsAg and HCV antibodies evidently varies across the Pakistani population, since each study's sample is limited to a particular area or segment of a province.
HBV and HCV genotyping is important for tracking the route of transmission and pathogenesis of these viruses. In particular, the variants may differ in their patterns of serologic reactivity, pathogenicity, virulence, and response to therapy. Both HBV and HCV show genetic variation that corresponds to geographic distribution: HBV has been classified into 8 genotypes (A to H) on the basis of whole-genome sequence divergence of greater than 8%, and HCV into 6 genotypes (1 to 6) using restriction fragment length polymorphism analysis of the 5' non-coding region (NCR).
In this study, genotyping was carried out for 6 HBV genotypes (A through F) and 6 HCV genotypes (1 through 6). The results suggest that HBV genotype D is the most prevalent (114/144 = 79.2%) among otherwise healthy females in Karachi, Sindh, Pakistan, whether alone or in co-infection with other HBV genotypes. In our previous study, HBV genotype D was found to be ubiquitous (100%) among otherwise healthy females in Karachi, alone or in co-infection with other HBV genotypes, followed by genotype B.41,42 The two earlier studies of HBV genotype prevalence in known hepatitis B positive patients in the province of Sindh reported prevalences of genotype A of 68% and of genotype D of 100%.58,59 Interestingly, in this study we again found genotype D to be the most prevalent genotype, but it was followed by genotypes A (5.5%) and F (4.9%). The prevalence of genotype B in this study was 3.5%, whereas our earlier study had shown a genotype B prevalence of 16.1% in otherwise healthy females.60 These findings thus partly contradict and partly corroborate the previous reports of HBV genotype distribution, even though the subjects in this study were also asymptomatic, comprising a second group of female volunteer students at the same two universities. Of the 144 subjects positive for HBsAg, 10 reported a previous history of jaundice; the rest were not aware of their HBV status. In the nearby north Indian population, HBV genotype D was reported as the predominant genotype (75%) in patients diagnosed with chronic liver disease (CLDB).60 We also found other HBV genotypes in the study population: genotype F (4.9%), followed by genotype E (2.1%) and genotype C (1.4%). In addition, we saw mixed HBV infections of genotypes B and D, A and D, C and D, and B and C (20.8%, 7.6%, 2.8% and 2.8% respectively) among these otherwise healthy females.
Among the 28 diagnosed HBV patients from Anklesaria Hospital, 67.9% showed HBV genotype D infection followed by genotype A infection (10.7%). In this group of 28 HBV positive patients we also saw infections with genotypes B (3.6%), C (3.6%) and F (3.6%). This group exhibited 10.7% co-infection with genotypes B and A.
As far as the HCV status of the 144 HBV positive otherwise healthy females is concerned, only 2 (1.4%) tested positive for HCV antibodies by rapid immunochromatography. The PCR results, however, showed that 39 (27.1%) of these 144 females, including the 2 who had tested antibody positive, were also positive for HCV. Among the 39 HCV positive otherwise healthy females, the predominant HCV genotype was 3a (66.6%), followed by genotypes 6a (12.8%), 3b (10.3%) and 5a (10.3%). An earlier study of samples from women at the two universities had shown that, among HCV positive apparently healthy females, 51.44% were genotype 3a, 24.03% exhibited a mix of genotypes 3a and 3b, 15.86% were genotype 3b, and 4.80% were genotype 1b.42 Interestingly, in the group of 28 diagnosed HBV patients, genotype 3a was again the dominant HCV genotype, although its prevalence (7.1%) was much lower than in the otherwise healthy females; it was followed by genotypes 2a (3.6%) and 5a (3.6%). Overall, therefore, there was 25% co-infection with both viruses, HBV and HCV, among the HBsAg positive individuals. The sample of 28 HBV positive patients came from a hospital located in the centre of the metropolis, serving an area of Karachi where poor sanitation, malnourishment, illiteracy and lack of awareness are very common. Prostitution may also be a factor in the spread of both HBV and HCV in some localities of Karachi.
Conclusion:
In conclusion, genotype D appears to be the dominant HBV genotype, and 3a the most prevalent HCV genotype, both in the otherwise healthy young female population of Sindh, Pakistan and in diagnosed HBV patients. Co-infection with both viruses, HBV and HCV, exists among HBsAg positive individuals. The young female participants were advised to seek appropriate medical care, both for their own benefit and for the benefit of public health.
Dr David Fearnley, aged 41, is a Consultant Forensic Psychiatrist at Ashworth Hospital, a high secure psychiatric hospital in Merseyside, UK. He is also the Medical Director of Mersey Care NHS Trust, which is a large mental health and learning disability trust and one of three in England that have a high secure service. As Medical Director he is responsible for the performance of over 175 doctors and 50 pharmacists, and has lead responsibility for R&D and information governance. He is the College Special Advisor on Appraisal at the Royal College of Psychiatrists and has an interest in the development of management and leadership skills in doctors.
How long have you been working in your specialty?
I started training in psychiatry in 1994, and undertook specialist registrar training between 1998 and 2001. I became a consultant forensic psychiatrist in a high secure hospital in 2001 and medical director for the wider trust in 2005.
Which aspect of your work do you find most satisfying?
I have always found clinical work satisfying, and particularly when it becomes linked to wider service changes. I think this is why I decided to take on management responsibilities in addition to my clinical work so that I could continue to work at this interface.
What achievements are you most proud of in your medical career?
I have been particularly pleased whenever I have passed my exams and I have been able to make progress in my career. Also, in 2009 I won the inaugural Royal College of Psychiatrists Psychiatrist of the Year award, largely because of my innovative approach to involving service users and carers in their treatment.
Which part of your job do you enjoy the least?
I find that I dislike having to read poorly written reports because of the limited time available to do other things!
What are your views about the current status of medical training in your country and what do you think needs to change?
In my view, medical training in England is of an exceptionally high standard although more emphasis will need to be brought into training around management and leadership.
How would you encourage more medical students into entering your speciality?
I think medical students should be exposed to mental health services as soon as possible, not only to see the clinical aspects but also to appreciate the organisational structures.
What qualities do you think a good trainee should possess?
I think trainees should develop a sense of respect for everybody they work with including the service users and carers, particularly when they feel under pressure. This is, in my opinion, the hallmark of somebody who will make a great clinician.
What is the most important advice you could offer to a new trainee?
I think new trainees should create habits in terms of acquiring new knowledge (particularly evidence based knowledge) so that they build up a sense of lifelong learning that extends beyond clinical examinations.
What qualities do you think a good trainer should possess?
A good trainer should be approachable and accessible, with a willingness to challenge the status quo but also show interest in the life of the trainee.
Do you think doctors are over-regulated compared with other professions?
The medical profession is entering a phase of increased regulation through revalidation. I think this is an acceptable position in view of the enormous privilege that practising medicine offers and the need to assure the public that doctors are fit to practise.
Is there any aspect of current health policies in your country that are de-professionalising doctors? If yes what should be done to counter this trend?
I think doctors are becoming better at identifying certain tasks that others are equally capable of undertaking. I think doctors should be continually seeking out areas of healthcare that they alone have the skills, knowledge and attitude to be responsible for.
Which scientific paper/publication has influenced you the most?
I have found the work of the Cochrane Collaboration (rather than a single publication) to have influenced me considerably, because it made me aware, through the work of Archie Cochrane, of the importance of standing back and comparing more than one study whenever possible.
What single area of medical research in your speciality should be given priority?
I think the overlap between mental illness and personality disorder is not understood well enough and yet is a major reason for patients remaining in secure care longer than perhaps they might need to in the future.
What is the most challenging area in your speciality that needs further development?
As a medical manager, I think that more needs to be done to encourage doctors to see management and leadership as part of their role as a professional and to gain competencies and confidence in these areas during their undergraduate and postgraduate training.
Which changes would substantially improve the quality of healthcare in your country?
Healthcare delivery in the UK is undergoing change following the publication of the coalition government's White Paper on health, which encourages clinicians, particularly GPs, to take part in commissioning. I think this, alongside a focus on better outcome measures, is likely to improve the quality of healthcare.
Do you think doctors can make a valuable contribution to healthcare management? If so how?
I think doctors are in a unique position following years of clinical training to make decisions in terms of management and leadership. They should be able to transfer their ability to manage particular cases over time to managing projects and resources both in operational and strategic terms.
How has the political environment affected your work?
The NHS has an element of political oversight that does influence the work, particularly in the high secure service where public protection is a key factor.
What are your interests outside of work?
My time outside work is spent almost exclusively with my family.
If you were not a doctor, what would you do?
I would like to be a writer (although I doubt I have the skills to do so successfully!)
The Department of Health's Modernising Medical Careers (MMC) has been uniformly implemented into specialty training across the United Kingdom (UK). This began with the controversial and subsequently redundant Medical Training Application System (MTAS) selection process in Spring 2007, and ended with the first MMC specialty training posts commencing in August 2007. During the application process itself, one preliminary study reported that 85% of candidates experienced decreased enjoyment of their work, and 43% reported caring less about patient care.1 The emergency introduction of the 'golden ticket' Round 1b guaranteed interview - though arguably justified in the face of a flawed application system - was a cause of further discontent and division amongst junior trainees and the consultants responsible for appointing them.
For surgical training in particular, the advent of the MMC initiative combined with the European Working Time Directive (EWTD) represents an estimated 50% reduction in specialist training hours compared with the previous system.2 This has raised concerns not only among current consultants, but also among the now larger number of surgical trainees having to share the same caseload. A previous survey of Ear, Nose and Throat senior house officers reported that 71% were willing to opt out of the EWTD to safeguard their training and patient care.3
In the Oxford Deanery, the selection process for shortlisted surgical trainees in Rounds 1a and 1b consisted of six stations assessing curriculum vitae, portfolio, clinical examination, data interpretation, and pre- and post-operative management (totalling one hour). Candidates were offered generic or specialty-themed Core Training (CT) posts at Specialty Training (ST) 1 or 2, or Fixed Term Specialty Training Appointments (FTSTA) 1 or 2, depending upon the candidate's ranking at interview (plus application form for Round 1a), irrespective of specialty preference. Following acceptance, individual appointments were made based on candidates' ranking of job preferences. Round 2 appointments were made at a local level via traditional selection methods. The most recent information from the deanery states that trainees who received an offer of run-through training in the region will be guaranteed an interview for an ST3 post in surgery; however, individual specialty preference and job allocation will be determined by re-ranking based on continuous appraisal during the core surgical training years, further Higher Specialist Training interviews, and the training numbers available.
The media coverage that surrounded MTAS clearly highlighted the dissatisfaction amongst trainees and consultants leading up to and during the application process,4, 5 but no study has yet assessed the views of surgical trainees following the start of their new MMC-based training posts. This survey aimed to obtain the views and outcomes of core surgical trainees in the Oxford Deanery.
Methods
At three and nine months following the commencement of specialty training posts, questionnaires were distributed to junior surgeons (CT1-2) in the Oxford Deanery School of Surgery. Questions were structured to obtain information about level of experience and qualification(s), current and desired surgical specialty, job satisfaction, attitudes towards 'run-through' training, and levels of support. In the Oxford Deanery there were 40 appointments at CT1 (18 ST1 and 22 FTSTA) and 29 at CT2 (17 ST2 and 12 FTSTA) in August 2007. Data were expressed as the mean ± standard deviation (SD). Statistical comparison was performed using the Mann-Whitney U test, with the significance level set at p<0.05.
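For illustration, a group comparison of this kind can be sketched as below (Python with SciPy; the two score vectors are invented placeholders, not the survey data), applying a two-sided Mann-Whitney U test to 1-5 rating scores.

```python
# Minimal sketch of the survey's group comparison: a two-sided Mann-Whitney U
# test on ordinal 1-5 scores, with significance taken at p < 0.05.
# The scores below are invented placeholders, NOT the study data.
from scipy.stats import mannwhitneyu

st_scores = [5, 4, 4, 3, 5, 4, 2, 5]      # hypothetical ST trainee ratings
ftsta_scores = [2, 1, 3, 2, 2, 4, 1, 2]   # hypothetical FTSTA trainee ratings

u_stat, p_value = mannwhitneyu(st_scores, ftsta_scores, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```

A rank-based test such as Mann-Whitney suits this design because the satisfaction ratings are ordinal rather than normally distributed.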
Results
The questionnaire was completed by 46 and 45 surgical trainees at three and nine months respectively. At the three-month time point this represented 67% of all trainees in the Oxford Deanery School of Surgery (male:female, 33:13) and included 11 at ST1, 16 at ST2, 11 at FTSTA1, and 8 at FTSTA2. Of these, 52% (n=24) had obtained their post via Round 1a, 41% (n=19) via Round 1b, and 7% (n=3) via Round 2. Across CT1 (ST1 & FTSTA1) and CT2 (ST2 & FTSTA2), trainees were on average 3.7 ± 1.9 years post-graduation at the time surveyed (CT1 range 1-11 years, CT2 range 3-8 years); 16% (n=7) of all trainees had previously studied medicine at Oxford University, and 93% had studied medicine in the UK (Figures 1a, b). The most popular desired specialties at three and nine months are displayed in Figure 2. Of the 46 respondents, all had worked in the specialty of their career choice during the course of the year.
Figure 1a. Number of trainees selected in each MTAS round
Figure 1b. Surgical trainee graduating medical school distribution
At the time of appointment, 52% of trainees had completed the Membership of the Royal College of Surgeons (MRCS) exams, and 35% (n=16) of all trainees had completed a higher degree (Figure 3). Furthermore, 22% (n=10) felt that there should be a further exam in addition to the MRCS to rank candidates for appointment to higher specialist training (ST3 onwards), half of whom had already obtained their MRCS.
Figure 2. Desired surgical specialty at three and nine months
Figure 3. Trainee postgraduate qualifications at time of appointment
Those who had been allocated to 'run-through' ST posts were more satisfied with the concept of run-through training than those in FTSTA posts (scores assigned on a scale from 1, very unsatisfied, to 5, very satisfied): the mean score at three months was 4.1 ± 1.4 for ST trainees versus 2.0 ± 1.4 for FTSTA trainees (p<0.01), and at nine months 3.7 ± 1.1 versus 2.1 ± 1.1 (p<0.01). Job satisfaction levels in the two groups were similar: at three months, mean score 3.5 ± 1.3 in ST posts versus 4.1 ± 0.8 in FTSTA posts (p>0.05), and at nine months 3.5 ± 1.0 versus 3.2 ± 1.3 (p>0.05). A similar pattern emerged when trainees were asked whether they had thought about leaving surgery: on a scale from 1 (never) to 5 (very frequently), the mean score at three months was 2.3 ± 1.4 for ST trainees versus 3.0 ± 1.6 for FTSTA trainees (p>0.05), and at nine months 2.2 ± 1.4 versus 2.9 ± 1.5 (p>0.05). (Figures 4a, b).
Figure 4a. Trainee attitudes at three months
Figure 4b. Trainee attitudes at nine months
In fact, 43% (n=20) of all trainees surveyed reported having enquired about surgical training in another country, with 4% (n=2, both UK Medical School graduates) stating that if unsuccessful in securing a training post in their desired specialty for August 2008, they would move abroad to train.
At three months, only 9% (n=4) of all trainees felt well informed about what would happen in the future regarding their training; 20% (n=9, ST to FTSTA ratio 2:7) responded that, had they been better informed prior to August 2007, they would not have accepted their current post; and 28% (n=13) felt well supported by their senior colleagues with regard to their future training. By nine months from appointment, however, 69% (n=29) of all trainees felt well informed, and nearly two thirds (n=27) felt well supported by their seniors (Figure 5). Ninety-three percent (n=43) of applicants wished to remain in the region for their future training, with 61% (n=28) having initially selected Oxford as their first-choice deanery.
Figure 5. How well informed and supported trainees felt at three and nine months
The majority of both ST2 (85%, n=11) and FTSTA2 (71%, n=5) trainees secured ST3 posts from August 2008, mainly within the Oxford Deanery, and all within their desired surgical specialty. All ST1 (n=16) trainees successfully moved into ST2 posts, and the majority of FTSTA1 (78%, n=7) trainees secured CT positions. (Table 1).
Grade (n)        August 2008 post (n)
ST1 (16)         ST2 (16)
FTSTA1 (9)       CT1 (3), CT2 (4), FTSTA (2)
ST2 (13)         ST3 (11), Research Fellow (1), GP Trainee (1)
FTSTA2 (7)       ST3 (5), ST1 Radiology (1), CT2 (1)
Table 1. Trainee outcomes from August 2008
Discussion
MMC has had, and will continue to have, profound implications for the way junior doctors are trained in the National Health Service (NHS). Last year's difficult introduction of specialist training has, for obvious reasons, directly affected the perceptions of trainees having to negotiate their careers through the 'transition' period.1, 6 This survey provides an interesting insight into the demographics, current viewpoints, and outcomes of the first cohort of MMC surgical trainees in the Oxford Deanery.
Just over half of all trainees in the survey were appointed after Round 1a (52%, n=24), of which two thirds (n=16) were to ST posts; a further 41% (n=19) were appointed after Round 1b, of which roughly half (n=9) were to ST posts. This highlights the large number of very good surgical trainees who might have been left unemployed had MTAS interim measures not been introduced to give all candidates the opportunity of at least one interview, and shows that, in the Oxford Deanery at least, candidates had an equal chance of obtaining a 'run-through' post in either round. Although the MMC person specifications at the time of application stated that MRCS was not an absolute requirement for entry at ST1-2, 52% (n=24) had completed their MRCS, and a further 20% (n=9) had completed at least Part I.
Overall job satisfaction levels were good amongst all trainees (mean score 3.7 ± 1.1), with 57% (n=26) still agreeing with the concept of 'run-through' training, and hence MMC. This view is maintained despite the problems associated with last year's application process, and in the face of an uncertain future. However, nearly half (43%, n=20) of trainees had enquired about training abroad, with several committed to leaving the UK next year if unable to obtain their desired surgical specialty. Given that the average cost of training a UK medical graduate is at least £150,000,7 and the dedication and effort needed to embark on a surgical career thereafter, care must be taken to improve morale amongst junior surgeons and to provide adequate and timely information. Encouragingly, between the two time points surveyed, the proportions of trainees feeling well supported by their seniors and well informed about their training increased from 28% to 60% and from 9% to 69% respectively; this may reflect extensive efforts by the Deanery and the Royal College of Surgeons to address trainee concerns.
The realistic future of those in FTSTA posts is cause for concern. This is highlighted in the recently released Tooke Report, which states that they are "in danger of becoming the next 'lost tribe', the very category of doctor MMC sought to avoid", but at the same time that "core [training] should not repeat the errors of previous SHO arrangements and must be time limited".6 Those in FTSTA posts face greater uncertainty about their future than their ST colleagues, and this was reflected in a higher reported likelihood of considering alternative careers outside surgery. However, the two groups' scores for how frequently they had thought of leaving surgery were statistically similar (2.3 ± 1.4 for ST trainees versus 3.0 ± 1.6 for FTSTA trainees, p>0.05), and 71% of FTSTA2 trainees surveyed within the Oxford Deanery went on to secure ST3-level posts in their desired specialty.
The authors note the limitations inherent in surveys generally: the validity and reliability of responses depend on the self-report method of data collection; the questionnaire itself constrains the information obtained; and the data do not capture the decision processes that produced the observed outcomes, and are therefore descriptive rather than explanatory. More specifically, candidates who were successful in obtaining an ST3 post may have been more likely to complete the questionnaire, introducing further potential bias.
Conclusion
MMC has crossed the threshold into higher specialist training, and the first cohorts of MMC surgeons are being trained. The majority of trainees we surveyed expressed good levels of job satisfaction, had successfully negotiated their first year of the new system, and, encouragingly, felt better informed and supported over the course of that year. However, this study encompassed only a proportion of surgical trainees in one UK deanery, and further study on a larger scale at regular time intervals is certainly warranted. Following MMC's difficult introduction, positive steps have included travelling tours by the Royal College of Surgeons of England and, in the Oxford Deanery at least, regional meetings to address concerns and expectations and to outline the realistic future for surgical trainees. Perhaps a key determinant of the sustainability of MMC in surgery in 2008 and beyond will be the relative success of the Intercollegiate Surgical Curriculum Programme (ISCP), and this represents a significant area for further study.
Vitiligo is one of the oldest and commonest skin disorders, affecting approximately 1-2% of the human population.1 The disease shows no regard for the ethnic, racial or socioeconomic background of those affected. Its cosmetic impact is tremendous and its psychological impact devastating, particularly in darker-skinned individuals.2,3,4 The aetiopathogenesis of this disease is now much better understood (table 1)5 than it was a decade earlier, but much remains unknown. In parallel with these developments on the aetiological front, many new advances have been made on the therapeutic front as well. With these new therapeutic options, we are currently in a much better position to treat this disease than we were a decade or two ago. So, how far and how satisfactorily are we able to treat this disorder now? What are the new treatment options available, and how far have they helped the dermatologist to claim a cure for this disorder? These are some of the questions addressed in this paper.
New advances in management
Medical therapies
The most recent advances on the medical front have been Narrowband Ultraviolet B (NB-UVB) therapy, Targeted Ultraviolet B (UVB), Excimer laser therapies, topical immunomodulator treatment in the form of topical calcineurin inhibitors, topical pseudocatalase, and topical Vitamin D analogues in combination with Ultraviolet (UV) light.
NB-UVB
NB-UVB, using UV lamps with a peak emission of around 311nm, has now emerged as the treatment of first choice in generalized vitiligo as well as vitiligo vulgaris (patchy vitiligo).6,7,8 The efficacy of NB-UVB in vitiligo was first demonstrated by Westerhof and Nieuwboer-Krobotova in 1997.9 Since then, a large number of clinical studies have demonstrated the therapeutic benefit of NB-UVB in vitiligo patients. The mechanism of action of NB-UVB in vitiligo is through induction of local immunosuppression and stimulation of the proliferation of melanocytes in the skin and the outer root sheath of hair follicles.6 There is a stimulatory effect on melanogenesis and on the production of Melanocyte Stimulating Hormone (MSH).6 Comparison studies have shown a significantly enhanced rate of repigmentation with NB-UVB compared with topical Psoralen and Ultraviolet A (PUVA) therapy.10 Furthermore, the incidence of adverse effects seen commonly with topical PUVA, such as phototoxicity, is significantly reduced with the use of NB-UVB.
NB-UVB has shown a number of advantages over PUVA in vitiligo patients in addition to its excellent efficacy. These include its extremely low side-effect profile, particularly systemically, its established safety in children, and its safety in pregnant women. NB-UVB also achieves considerably better patient compliance, as there is no need to time exposure with any drug intake, nor any need for eye protection beyond the treatment exposure time. A recent double-blind randomized study11 comparing NB-UVB with PUVA demonstrated much better efficacy with NB-UVB. The study found that repigmentation achieved with NB-UVB matched the colour of uninvolved skin better, and was more persistent, than that achieved with PUVA.11
In addition NB-UVB has been used in childhood vitiligo with excellent results.12 No additional adverse effects were seen in children with NB-UVB as compared with those in adults. Furthermore, given the long-term safety profile of NB-UVB in comparison with PUVA as far as skin malignancies are concerned,13 NB-UVB is now preferred over all other treatment options in the management of generalized vitiligo in both adults and children.
Table 1: Aetiological hypotheses of vitiligo5
Autoimmune hypothesis: vitiligo occurs because of destruction of melanocytes by an immune mechanism. This is the most favoured theory at present, supported by many recent in-vitro studies.
Auto-cytotoxic hypothesis: vitiligo occurs because of accumulation of toxic metabolites in the melanocytes, secondary to a defect in their metabolic clearance of the toxins.
Neurogenic hypothesis: vitiligo is due to an altered reaction of epidermal melanocytes to neuropeptides, catecholamines and their metabolites.
Biochemical hypothesis: over-production of tetrahydrobiopterin, a cofactor of tyrosine hydroxylase, results in accumulation of catecholamines, which in turn leads to the formation of reactive oxygen species in the melanocytes; these reactive oxygen species are thought to destroy the affected melanocytes in vitiligo patients.
NB-UVB has been used in combination with various topical agents to increase its efficacy and thus shorten the total duration of treatment. Treatment options that have been combined with NB-UVB in vitiligo to date include topical tacrolimus,14,15 pimecrolimus,16 Vitamin D analogues17,18 and even topical pseudocatalase.19 While some studies have shown a synergistic effect with these combinations, others have found their efficacy to be similar to that of NB-UVB alone. In one half-body comparison study, topical placental extract was used in combination with NB-UVB, but the combination offered no added benefit over NB-UVB alone.20 The ideal topical agent to combine with NB-UVB therefore remains unknown.
Laser Therapy
The Excimer laser, which uses xenon chloride (XeCl) gas and produces monochromatic laser light at a wavelength of 308nm, is another innovative treatment option for vitiligo. This laser system has been used with increasing frequency over the last few years for targeted treatment of individual vitiligo lesions.21 The laser is used either alone or in combination with topical immunomodulator or PUVA-sol therapy.22,23 Treatment with this laser is claimed to give extremely good and early results in both localized and segmental vitiligo. In a pilot study21 of 18 patients with 29 affected areas, 57% of lesions showed varying degrees of repigmentation after just six exposures over two weeks; the figure increased to 87% after 12 treatments over four weeks.21 Another recent study reported repigmentation of >75% in 61% of lesions after 30 treatments with the Excimer laser, with repigmentation better on the face and trunk than on the extremities.24
Topical therapies, particularly topical tacrolimus, have been used in combination with Excimer laser. This combination has been claimed to be more effective than Excimer laser alone.22 In a randomized right-left comparison study22 with 14 patients, Excimer light monotherapy was compared with a combination of Excimer laser with topical tacrolimus. While 20% of lesions treated with Excimer laser alone achieved >75% repigmentation, the same degree of repigmentation was obtained in 70% lesions with the combination treatment.22 Topical methoxsalen has also been used in combination with Excimer laser phototherapy and this has been claimed to have worked better than laser therapy alone.23
The advantage of Excimer laser therapy over conventional UVB therapy is the targeted mode of treatment with no exposure of the uninvolved skin. Moreover, the onset of repigmentation is earlier with Excimer laser therapy than with UVB therapy.
Targeted UVB therapy
This is another recent innovation in vitiligo management to have arrived over the last few years. The beauty of this therapy is that it delivers high-intensity UVB light only to the affected vitiliginous areas, avoiding any exposure of the uninvolved skin. This not only decreases the cumulative UVB dose received by an individual patient, but is also claimed to improve the efficacy of treatment quite significantly.
Targeted UVB therapy, as expected, finds its use more in the treatment of focal and segmental types of vitiligo. In fact, the first study25 with targeted UVB therapy was done on eight patients with segmental vitiligo. Five of these patients achieved >75% repigmentation of their lesions with this therapy.25
Targeted UVB therapy offers certain advantages over Excimer laser phototherapy. The treatment is safer and more efficacious compared with conventional UVB therapy, and almost as efficacious but much less costly than Excimer laser therapy.26
Systemic immunomodulator therapy
Vitiligo is thought to be an immune-mediated disease, and thus immunosuppressive and immunomodulator agents have been used on a regular basis in this disease. Among the immunosuppressants, systemic steroids have been the most commonly used. However, systemic steroid therapy has always been associated with a high incidence of adverse effects, especially in children, the age group most commonly affected. To overcome this limitation, steroids have been given in pulse or even mini-pulse form. A prospective study involving 14 patients with progressive or static vitiligo showed cessation of disease activity and a repigmentation rate of 10-60% after high-dose methylprednisolone pulse therapy administered on three consecutive days.27 Systemic steroids have also been administered in a mini-pulse form on two consecutive days every week, known as Oral Minipulse (OMP) therapy. The first study demonstrating the efficacy of OMP with oral betamethasone (0.1mg/kg to a maximum of 5mg) was described in 1991.28 In a later study29 of childhood vitiligo, betamethasone was replaced by oral methylprednisolone and combined with topical fluticasone ointment on the vitiligo lesions. The disease was arrested in >90% of patients, and >65% of children achieved good to excellent (>50%) repigmentation of their vitiligo lesions.29
Topical Vitamin D analogues
Vitamin D analogues, particularly calcipotriol, have been used topically, either alone or in combination with topical steroids, in the management of vitiligo. The rationale for their use is that Vitamin D3 affects the growth and differentiation of both melanocytes and keratinocytes, which has been further supported by the demonstration of receptors for 1,25-dihydroxyvitamin D3 on melanocytes. These receptors are believed to have a role in stimulating melanogenesis.29 Vitamin D analogues have given variable results in the treatment of vitiligo in different studies. They have also been used in combination with UV light (including NB-UVB) and topical steroids, with variable results.30,31,32
Topical immunomodulators
Topical immunomodulators such as tacrolimus and pimecrolimus have been the most promising recent additions to topical vitiligo therapy. Indeed, because of their efficacy and remarkable safety profile, the use of these agents in vitiligo has shown a consistently increasing trend over the last few years. They can be safely administered to young children, as they do not cause atrophy or telangiectasia of the skin even after prolonged use, and there is no risk of the hypothalamic-pituitary-adrenal (HPA) axis suppression seen with widespread use of potent topical steroids.33 The first study demonstrating the efficacy of tacrolimus in vitiligo was published in 2002.34 In that study, tacrolimus was used in six patients with generalized vitiligo, five of whom achieved >50% repigmentation of their lesions by the end of the study period.34 Many additional studies have since been published and have clearly demonstrated the role of topical tacrolimus in vitiligo. The best results with topical immunomodulator therapy have been seen on exposed parts of the body, such as the face and neck; as with any other therapy, the acral parts of the body respond least.34,35 Similar results have been obtained with topical pimecrolimus in vitiligo patients.36
Pseudocatalase
Pseudocatalase has been used in combination with Dead Sea climatotherapy or UVB exposure for the treatment of vitiligo. The rationale for its use is the evidence of oxidative stress and high H2O2 levels in lesional skin.37 While some earlier studies37 demonstrated excellent results with this agent in inducing repigmentation, later studies have cast doubt on its efficacy.38 Pseudocatalase is applied topically to the lesional skin, followed by UVB exposure of the whole body or of the lesional skin. The combination is claimed to correct the oxidative stress on melanocytes in vitiligo patients and thus to correct the depigmentation.
Topical 5-Fluorouracil
Topical 5-fluorouracil is supposed to induce repigmentation of vitiligo lesions by overstimulation of follicular melanocytes, which migrate to the epidermis during epithelialization.39 This form of topical therapy can be combined with spot dermabrasion of the vitiligo lesions to improve the repigmentation response. In a study by Sethi et al, a response rate of 73.3% was observed with a combination of spot dermabrasion and topical 5-fluorouracil after a treatment period of six months.40
Surgical therapies
Surgical therapies for vitiligo have appreciably increased the proportion of patients in whom the disease can be treated successfully, with a consequent increase in their use in the management of unresponsive vitiligo both in India and abroad. As a rule, these surgical therapies are indicated in patients who have had stable (non-progressive) disease for at least one year that has not responded to medical treatment. Their most important advantage is that the chance of repigmentation of lesions is in the range of 90-100%. Moreover, these interventions are becoming better and easier to perform with every passing day.
Surgical therapies that have been attempted in the management of vitiligo include autologous suction blister grafting, split-thickness grafting, punch grafting, smash grafting, single follicular unit grafting, non-cultured epidermal suspensions and autologous melanocyte culture grafting. All these grafting procedures, except melanocyte culture grafting, are easy to perform and do not require any sophisticated instruments. The techniques are now divided into two types, tissue grafts and cellular grafts, depending on whether whole epidermal/dermal tissue or an individual cellular component is transplanted.
Tissue grafting techniques
Suction blister grafting
Here, thin epidermal grafts are taken from suction blisters raised on the donor site, usually the buttocks or thighs. The blisters are produced by applying sufficient negative pressure to the donor skin using a suction apparatus or syringes with three-way cannulae. The epidermal grafts are then transplanted onto dermabraded vitiligo lesions, which leads to repigmentation of the recipient areas with excellent cosmetic matching. The ease of the procedure, the high success rate and the excellent cosmetic results have made suction blister grafting the procedure of choice in vitiligo grafting.41
Split thickness grafting
In this technique a thin split-thickness graft is taken from a donor site with the help of a dermatome, Humby’s knife, Silver’s knife or a simple shaving blade, and transplanted onto the dermabraded recipient area. This technique also gives excellent cosmetic matching after repigmentation, and its repigmentation rate is also quite high; indeed, most comparative studies of grafting techniques in vitiligo have shown that maximum repigmentation is achieved with either suction blister grafting or split-thickness grafting.41 The advantage of split-thickness grafting over the suction blister method is that a relatively larger area of vitiligo can be tackled in a single sitting. Both split-thickness skin grafting and suction blister grafting can be followed by NB-UVB to achieve faster and better results.
Miniature punch grafting
Here, full-thickness punch grafts of 1.0 to 2.0 mm diameter are taken from a suitable donor site and transplanted into similar punch-shaped beds in the recipient vitiligo lesions. The recipient area is then treated with either PUVA/PUVA-sol or topical steroids, leading to spread of pigment from the transplanted punches to the surrounding skin until, with time, the whole of the recipient area repigments. The advantages of this procedure are that it is easy to perform and can cover a relatively larger vitiligo area than the above two procedures; lesions with irregular or geographical shapes can also be treated. There are, however, certain limitations: the risk of a ‘cobblestone’ or ‘polka-dot’ appearance and of hypertrophic changes at the recipient site.42 All these side effects can be minimized by proper patient selection and by the use of smaller punches of 1.0 to 1.5 mm diameter. Miniature punch grafting is presently the commonest surgical procedure performed on vitiligo patients in India.
Follicular unit grafting
In this technique, single-hair follicular units are harvested and prepared from a suitable donor area, as in hair transplantation. These follicular units are then cut above the level of the follicular bulb and transplanted into the vitiligo lesions. The idea behind this technique is that the melanocytes in the follicular unit are ‘donated’ to the vitiliginous skin and serve as a source of pigment at the recipient site. The repigmentation process closely simulates the normal repigmentation of vitiliginous skin and thus gives an excellent cosmetic result. The procedure combines the advantages of punch grafting with the excellent cosmetic results of the split-thickness and blister grafting techniques,43 but it is tedious and needs considerable expertise on the part of the cosmetic surgeon.
Smash grafting
In this technique, a split-thickness graft is taken and ‘smashed’, or cut into very small pieces, with a surgical blade on a suitable surface such as a glass slide. The smashed tissue is then transplanted onto the dermabraded recipient skin and covered with a special powder or corrugated tube dressing to keep the smash graft undisturbed on the recipient area. The advantage of this technique over simple split-thickness grafting is that thicker grafts can be used with a good cosmetic result. The procedure is indicated for those who are relatively inexperienced and cannot take an ideal, thin and transparent split-thickness graft from the donor area.44
Cellular grafting techniques
Non-cultured epidermal suspensions
Here a split-thickness graft is taken from a donor area and incubated overnight. The next day the cells are separated enzymatically using trypsin-EDTA solution and centrifuged to prepare a suspension. This cell suspension is applied to the dermabraded vitiligo lesions and held in place with a collagen dressing. A relatively large area of vitiligo, about ten times the size of the donor graft, can be treated with this procedure.45 The recipient area, however, has to be treated with either NB-UVB or PUVA for two to three months to achieve the desired pigmentation.
Melanocyte culture transplantation
This is a more advanced grafting procedure in which, once again, a split-thickness graft is taken from a donor area and incubated in an appropriate culture medium to grow melanocytes, or a keratinocyte-melanocyte combination, in vitro. The cultured cells are then applied onto laser-dermabraded, or even mechanically abraded, lesional skin.46,47 The procedure is obviously more difficult to perform, as it needs advanced laboratory facilities for melanocyte culture; however, the results are excellent and a relatively large area of involved skin can be treated from a single donor graft.
Summary
Table 2 summarises the above discussion of treatment options in vitiligo.
Table 2: New treatment options in vitiligo

Medical therapies and phototherapy:
· Narrowband UVB therapy, either alone or in combination with immunomodulators, vitamin D analogues etc.
· Topical immunomodulators (tacrolimus, pimecrolimus)
· Vitamin D analogues (calcipotriol)
· Pseudocatalase combined with UVB exposure or Dead Sea climatotherapy
· Topical 5-fluorouracil with spot dermabrasion

Surgical therapies:
· Tissue grafts: suction blister grafting, split-thickness grafting, miniature punch grafting, follicular unit grafting, smash grafting
· Cellular grafts: non-cultured epidermal suspensions, melanocyte culture transplantation
Liver abscess accounts for 48% of visceral abscesses1 and carries significant morbidity and mortality. The overall incidence of pyogenic liver abscess is 3.6 per 100,000 population.2 However, elevated pancreatic enzymes within the contents of a liver abscess have never been reported in the literature.
CASE REPORT:
A 36-year-old African American male with a history of chronic pancreatitis presented to the emergency department with epigastric abdominal pain accompanied by nausea, vomiting, diarrhoea and fever. His symptoms began 3-4 days before presentation. The abdominal pain was dull, non-radiating and 6/10 in intensity. His past medical history was significant for hypertension, diabetes mellitus and chronic diarrhoea secondary to chronic pancreatitis.
On admission the patient was alert and oriented; blood pressure was 97/44 mmHg, heart rate 16 beats per minute, respiration 16 per minute, oxygen saturation 94% on room air and temperature 102°F. Abdominal examination revealed hyperactive bowel sounds and tenderness in the epigastrium and right upper quadrant (RUQ). Liver span was 14 cm. The rest of the examination was unremarkable.
Laboratory work revealed: haemoglobin 9.8 g/dl, WBC 22.1 × 10³/mm³ with 81% segmented neutrophils and 9% bands, BUN 54 mg/dl, creatinine 4.7 mg/dl, total protein 10.4 g/dl, albumin 1.8 g/dl, total bilirubin 1.1 mg/dl, direct bilirubin 0.3 mg/dl, AST 98 IU/L, ALT 38 IU/L, alkaline phosphatase 250 IU/L, amylase 81 units/L, lipase 10 units/L, lactate 2.3 mmol/L and INR 1.39.
The patient was started on fluids and meropenem for broad-spectrum coverage. However, his condition worsened and he developed acute respiratory distress syndrome secondary to sepsis, necessitating intubation. Because of his abdominal pain he underwent a computed tomography (CT) scan of the abdomen, which revealed pancreatic calcifications and multiple liver abscesses, the largest measuring 7.5 cm in the right lobe of the liver (Figure 1).
Figure 1
As the patient’s condition did not improve, he underwent liver abscess drainage. Fluid analysis showed pH 4.0, LDH 39 units/L, glucose 81 mg/dl, protein 1.6 g/dl, lipase 16 units/L and amylase 68 units/L. The presence of amylase and lipase in the liver abscess, without any evidence on CT of a fistula between the liver and pancreas, was unexpected; it was therefore decided to leave the catheter in situ for continuous drainage.3 His blood and fluid cultures remained negative throughout the hospital stay, possibly because the initial antibiotic therapy had rendered them negative, and antibiotics were continued. Successful management was confirmed by a hepatic CT 10 days after drainage and by improvement in the patient’s general condition: normal temperature, decreasing catheter output and resolution of the deranged laboratory values. The catheter was then removed and the patient was discharged.
DISCUSSION:
Liver abscesses develop via seeding through the portal circulation, by direct spread from biliary infections, from surgical or penetrating wounds, or haematogenously from systemic sources. In our case the most reasonable explanation was involvement of the portal circulation due to recurrent pancreatitis.
The morbidity and mortality rate for liver abscesses ranges from 2-12%, depending on the severity of underlying co-morbidities.2 The clinical manifestations, as in our case, are characterized by abdominal pain (50-75%),4,5 fever (90%), nausea and vomiting. Other symptoms may include weight loss, malaise and diarrhoea. On physical examination, RUQ tenderness, guarding, rebound tenderness, hepatomegaly and occasionally jaundice can be appreciated. The diagnosis of a liver abscess is made by radiographic imaging followed by aspiration and culture of the abscess material. Liver abscesses can be either polymicrobial or monomicrobial, unlike our patient’s abscess, which was sterile. Depending on the microbiological results, additional sources of infection should be evaluated. Drainage of abscesses can be percutaneous6 or open surgical. Percutaneous drainage with antibiotic cover was successful in our patient.
CONCLUSION:
In summary, we present a case of pancreato-liver abscess in a patient with a history of chronic calcific pancreatitis. It was treated with antibiotics and percutaneous drainage, with satisfactory resolution. To our knowledge this has never been reported in the literature, and more work needs to be done to understand the pathophysiology of elevated pancreatic enzymes within a liver abscess in a patient with chronic pancreatitis.
Myocardial ischemia from coronary artery vasospasm can lead to a variety of presentations, including stable angina, unstable angina, myocardial infarction and sudden death.1 Although the pathognomonic clinical picture comprises chest pain, transient ST-segment elevation on the electrocardiogram (ECG) and vasospasm on coronary angiography, atypical presentations have also been reported.2 Various physiological factors, including stress, cold and hyperventilation, and pharmacological agents, including cocaine, ethanol, 5-fluorouracil and triptans, can precipitate a vasospastic attack.3-7 We report a case of ST-segment elevation due to right coronary artery vasospasm in a patient with hypoxic respiratory failure, and its successful treatment with calcium channel blockers.
Case description
A 56-year-old man was admitted for the repair of a large ventral incisional hernia. He had a history of morbid obesity, chronic obstructive pulmonary disease (COPD), hypertension and cigarette smoking. The postoperative course was complicated by bilateral pneumonia leading to respiratory failure requiring mechanical ventilation. An electrocardiogram at the time of intubation was essentially normal. Aside from bilateral rhonchi and crackles on lung auscultation, the physical examination findings were unremarkable. Arterial blood gases at the time of intubation demonstrated pH 7.33, PO2 58 mmHg, PCO2 65 mmHg and HCO3ˉ 20 mmol/L, indicating hypoxaemia with concomitant respiratory acidosis. Baseline laboratory studies, including cardiac enzymes, were within normal limits. The patient was treated with intravenous vancomycin for methicillin-resistant Staphylococcus aureus pneumonia. On postoperative day 4, he had recurrent episodes of transient ST-elevation on a bedside monitor (Fig. 1).
Figure 1
These episodes lasted 3-5 minutes and were associated with significant bradycardia and hypotension. In view of the recurrent episodes, haemodynamic instability and underlying risk factors for coronary artery disease, cardiac catheterization was performed. Coronary angiography revealed a 90% stenosis with haziness of the mid-right coronary artery without any other significant epicardial disease. Intravascular ultrasound (IVUS) was performed, and after administration of 100 mcg of intracoronary nitroglycerin the stenosis was reduced to almost 20% (Fig. 2).
Figure 2
A diagnosis of Prinzmetal’s angina was made on the basis of the clinical course and angiographic findings, and prompt therapy with diltiazem (120 mg per day) was initiated. The patient had no further episodes during the hospitalization or at 3-month follow-up.
Discussion
The prevalence of vasospasm has been reported to be higher in Japanese and Korean populations than in Western populations. A recent multi-institute survey in Japan documented spasm in 921 (40.9%) of 2251 consecutive patients who underwent angiography for angina pectoris.8 In contrast to the traditional risk factors for atherosclerotic coronary artery disease, smoking, older age and dyslipidaemia have been reported to be more frequent in patients with coronary vasospasm.9 Endothelial dysfunction is now considered to be the major inciting factor in the pathogenesis of vasospasm.10 In patients with vasospastic angina (VA) and normal coronary arteries on angiography, impaired endothelium-dependent and endothelium-independent vasodilatation has frequently been observed. Vascular tone is normally regulated by the production of vasodilator factors such as nitric oxide (NO) and prostacyclin and vasoconstricting agents such as endothelin-1. In the presence of a dysfunctional endothelium, agents that normally cause vasodilatation, such as acetylcholine, instead produce paradoxical vasoconstriction through direct smooth muscle stimulation.
Stress, whether physical or mental, has been shown to induce coronary vasospasm and myocardial ischemia. In a study by Kim et al, coronary spastic angina was diagnosed in 292 patients out of 672 coronary spasm provocation tests. Among these 292 patients, 21 (7.2%) had myocardial infarction, and 14 of these 21 had experienced severe emotional stress before the event.11 Recently, animal studies have also shown that high circulating levels of stress hormones (cortisol) exaggerate coronary vasoconstriction through Rho-kinase activation.12 In animal models, hypoxia has been seen to predispose to vasospasm through superoxide formation, which leads to loss of the vasodilator function of NO.13
The ECG changes that occur during an attack include ST-segment elevation and/or peaking of the T wave due to total or subtotal coronary occlusion.1 In some cases spasm can involve more than one artery, leading to ST-segment elevation in multiple leads, which may predispose to ventricular tachycardia or fibrillation.14 Coronary spasm is diagnosed by angiography, and spasm can occur at the site of an atherosclerotic plaque or in a normal segment of the coronary artery. In patients with an equivocal diagnosis, provocative tests, such as administration of acetylcholine or hyperventilation to induce spasm, may be required.
Current first-line therapy involves the use of calcium channel blockers (CCBs) alone or in combination with long-acting nitrates. In a study comparing the effects of long-acting nitrates (isosorbide dinitrate 40 mg/day) and calcium channel blockers (amlodipine 5 mg/day or long-acting nifedipine 20 mg/day) on coronary endothelium and vasoconstriction in patients with normal or minimally diseased coronary arteries, treatment with long-acting nitrates was associated with less favourable effects on coronary endothelial function.15 Sudden withdrawal of CCBs in patients with known vasospasm can lead to rebound of symptoms and may prove dangerous. In patients with refractory symptoms, alpha-blockers and nicorandil have been used. Although beta-blockers are believed to enhance vasospasm, betaxolol, a selective beta-1 blocker, has been found to be effective in the treatment of variant angina owing to its vasorelaxing effects.16 In addition, elimination or control of all other risk factors and precipitants is very important for successful treatment. In drug-refractory cases, percutaneous coronary intervention or coronary artery bypass grafting has been performed for relief of ischemia.17
Our patient had multiple precipitating factors for vasospasm. Endothelial dysfunction from severe physical illness and sepsis could have precipitated the VA; hypoxia from respiratory failure could also have been an inciting factor and cannot be ruled out. It is worth noting that intensive care unit patients frequently have both the underlying risk factors and the precipitating factors for vasospasm, yet VA as a clinical syndrome is uncommonly seen or reported.
Conclusion:
The clinician needs to be aware of coronary artery vasospasm as it can pose a serious medical threat. Early diagnosis and treatment may result in improved outcomes from vasospastic angina.
Urinary incontinence is a common and distressing condition. It is an underreported problem because of the stigma associated with the condition and many patients simply suffer in silence.
Definition
Urinary incontinence is defined as involuntary leakage of urine.
Prevalence
It has been estimated that in the United Kingdom (UK) 9.6 million women are affected by bladder problems.1,2 An overactive bladder alone affects five million adults, nearly 1 in 5 of the over-40 population.3 Prevalence is estimated to be 15% among healthy older adults and 65% among frail older adults.4 Urinary incontinence is twice as common in women as in men and can affect women of all ages, including after childbirth. In a cross-sectional survey of adult females attending a primary care practice in the UK, nearly half had urinary incontinence but only a small minority sought help.5 Forty-two per cent of women affected wait up to 15 years before seeking treatment.6
Types
1. Stress incontinence: This is involuntary urine leakage on exertion, such as coughing, laughing, sneezing or exercise. Stress incontinence is due to an incompetent urethral sphincter. It is largely caused by childbirth, so young women can develop this problem. Other causes include pelvic surgery or hysterectomy.
2. Urge incontinence: This is involuntary urinary leakage associated with urgency (a compelling desire to urinate that is difficult to defer) and is due to detrusor overactivity leading to detrusor contraction. Urge incontinence often appears later in life. Frequency or nocturia, with low volumes of urine voided, are signs of an overactive bladder, which can occur with or without urge incontinence.7 An overactive bladder affects both genders and its prevalence rises with age, affecting 16.7% of those aged over 40 in North America and Europe.3 An overactive bladder should be managed in the same manner as urge incontinence.
3. Overflow incontinence
4. Mixed incontinence: This is both stress and urge incontinence.
Risk factors
The most important risk factor is being female. Others are:
Obesity
Pregnancy and childbirth
Obstruction - tumours in the pelvis or impacted stool
Hysterectomy 8
Neurological disease
Cognitive impairment
Burden
In 2001 the annual estimated cost of dealing with bladder problems was £353.6 million.9 This included expenditure on pads. It is expected to be much higher now. Only a small proportion of the above amount was spent on drugs,10 the remainder being spent on secondary care and surgical treatment.
Bearing this in mind, the general practitioner (GP) is ideally placed to screen for and manage these patients in primary care; it is not necessary to refer all patients to secondary care. With the ever-increasing pressure on GPs to reduce unnecessary referrals, there is now scope for commissioning this service. However, management of an overactive bladder is not part of the Quality and Outcomes Framework, which could be one reason why GPs are not keen or enthusiastic.
Primary care management
History
A good history makes the initial diagnosis. Ask the woman whether she leaks on coughing, sneezing or exertion (stress) or whether she has an urgent need to pass urine before the leakage (urge). If she gives a history of both, she probably has mixed incontinence.
A history of nocturia or frequency with low urinary volumes suggests an overactive bladder. This should be managed in the same way as urge incontinence. Previous surgery, or the obstetric and gynaecological history, may give further clues as to the type of incontinence.
Examination
Abdominal examination - check for any palpable mass. This may be a palpable bladder, an ovarian cyst, or a large fibroid.
Pelvic examination - look for prolapse or an enlarged uterus due to fibroids. Inspection of the pelvic floor may show visible stress incontinence on straining or coughing.
Per-rectal (PR) examination if there is suspicion of constipation or faecal incontinence.
Investigations
Routine urine check for sugar and protein.
Mid-stream urine (MSU) to exclude urinary infection.
Bladder diary for three days. Ask the woman to complete a diary of time and fluid volume - intake and output - with episodes of urinary leakage and her activity at the time. Charts are available from pharmaceutical companies (keep the booklets in your examination room).
National Institute for Health and Clinical Excellence (NICE) states that the use of cystometry, ambulatory urodynamics or video-urodynamics is not recommended before commencing non-surgical treatment.11
Treatment
Treatment depends on the type of incontinence. Pregnancy and childbirth are known risk factors, and there is evidence that pelvic floor exercises during pregnancy reduce the risk; these exercises should be taught by the midwife during antenatal classes.
For stress incontinence, the first-line therapy is three months of pelvic floor exercises, which should be taught by the practice nurse; an instruction leaflet on its own is not enough. There is good evidence that advice on pelvic floor exercises is an appropriate treatment for women with persistent postpartum urinary incontinence.12
For urge incontinence, bladder training is the first step. The patient should be taught to gradually increase the time between voids.
Lifestyle advice should be given to all women with a body mass index (BMI) over 30 kg/m2.11
Household modifications, mobility aids and downstairs toilets can help an elderly patient struggling to reach the toilet in time.
Regular prompting of patients by residential or nursing home staff to visit the toilet can make a considerable difference, rather than simply relying on pads.
Patients with an overactive bladder should be advised to reduce their caffeine and alcohol intake.
Encourage the patient to drink two litres of fluid a day. Many women reduce their fluid intake in the hope of controlling their symptoms, but a low fluid intake leads to concentrated urine, which can irritate the bladder.
Antimuscarinic drugs such as oxybutynin can be used if bladder training is not successful. NICE recommends immediate-release oxybutynin as first line.11 Transdermal oxybutynin can be given if oral oxybutynin is not tolerated. Compliance is often a problem because of side effects, e.g. dry mouth, constipation, dry eyes, blurred vision, dizziness and cognitive impairment. Contraindications are acute angle-closure glaucoma, myasthenia gravis, severe ulcerative colitis and gastro-intestinal obstruction.
NICE does not recommend duloxetine as a first or second line treatment for stress incontinence. It can be considered if there are persisting side effects with oxybutynin.
Desmopressin or tricyclic antidepressants can be used in women with nocturia.
The role of hormone replacement therapy (HRT) is debatable. Although oestrogens may improve atrophic vaginitis, there is no evidence that oestrogens by themselves are beneficial in incontinence.13
Pads and catheters should only be issued on prescription if all treatment options have failed and the patient is waiting to see a specialist. These are coping aids.
Referral to secondary care
GPs should refer patients to a urogynaecologist or a surgeon who has experience in this field. Extra-contractual referrals are not favoured by Primary Care Trusts (PCTs) - try convincing your PCT!
Refer if there is:
Pelvic mass
Frank haematuria
Symptomatic prolapse
Suspected neurological disease
Urogenital fistula
Previous pelvic surgery
Failure of conservative measures and anticholinergic drugs.
In 1940, Reid and Brace1 first described the haemodynamic response to laryngoscopy and intubation caused by noxious stimulation of the upper airway. Laboratory evidence demonstrates that epipharyngeal and laryngopharyngeal stimulation augments cervical sympathetic activity in the efferent fibres to the heart. This explains the increase in plasma levels of norepinephrine and, to a lesser extent, epinephrine that occurs during airway instrumentation.2 The rise in pulse rate and blood pressure is usually transient, occurring within 30 seconds of intubation and lasting less than 10 minutes.3 These changes are usually well tolerated by healthy individuals, but they may be fatal in patients with hypertension, coronary artery disease or intracranial hypertension.3 Numerous approaches have therefore been used to blunt these stimulatory effects of laryngoscopy and endotracheal intubation on the cardiovascular system, such as deepening of anaesthesia3 and pretreatment with vasodilators such as nitroglycerin,4 beta-blockers,5 and opioids.6
Lornoxicam is a nonsteroidal anti-inflammatory drug (NSAID) that belongs chemically to the oxicams. It has been used successfully as a perioperative analgesic, with a better safety profile with regard to renal and hepatic function tests, and better gastrointestinal tolerability, than selective COX-2 inhibitors.7 Riad and Moussa8 reported that lornoxicam added to fentanyl attenuates the haemodynamic response to laryngoscopy and tracheal intubation in the elderly. Beyond this, few data are available regarding the efficacy of lornoxicam in controlling the haemodynamic changes of the peri-intubation period. The present study was therefore designed as a double-blind, randomised, placebo-controlled trial to investigate the effect of lornoxicam alone on the haemodynamic response and serum catecholamine levels following laryngoscopy and tracheal intubation.
Methods:
After obtaining the approval of the Hospital Research & Ethical Committee and patients' informed consent, fifty ASA I patients, aged 18-40 years, scheduled for elective surgical procedures under general anaesthesia requiring endotracheal intubation, were enrolled in this randomised, double-blind, placebo-controlled study. Patients who had taken drugs that could influence haemodynamic and autonomic function were excluded. Further exclusion criteria were risk of pulmonary aspiration, a predictably difficult airway, obesity (body mass index (BMI) > 30 kg/m2) and known allergy to NSAIDs.
In a double-blind fashion, using a sealed envelope technique, patients were randomly allocated to one of two groups to receive an intravenous (i.v.) injection of either lornoxicam 16 mg diluted in 4 ml (Group L, n = 25) or saline 4 ml as placebo (Group S, n = 25) half an hour before induction of anaesthesia, since the time taken by lornoxicam to reach peak plasma concentration (Tmax) has been determined to be 0.5 h.9 Since lornoxicam is yellow while the placebo is a clear fluid, the syringes containing the two solutions were prepared and covered, to maintain blinding, by a collaborator not involved in data recording. The same collaborator administered the drugs while a blinded observer collected the data.
Patients were not premedicated. In the holding area, an i.v. cannula was inserted and an i.v. infusion of lactated Ringer’s solution 10 ml kg-1 was started half an hour before induction of anaesthesia. Additionally, a 16-gauge i.v. catheter, attached to a stopcock and flushing device, was inserted into an antecubital vein of the contralateral arm to collect blood samples. Heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP) and arterial oxygen saturation (SpO2) were recorded before induction (baseline values).
After 3 min of pre-oxygenation, anaesthesia was induced with propofol 2.5 mg kg-1 and cisatracurium 0.15 mg kg-1 to facilitate tracheal intubation, which was performed by direct laryngoscopy once neuromuscular block was confirmed with a TOF-Guard train-of-four monitor. SBP, DBP, MAP and HR were recorded before and after administration of the i.v. anaesthetic, immediately after intubation and cuff inflation, and every minute (min) for 10 min after intubation. All intubations were performed by a single anaesthetist; the duration of laryngoscopy and intubation was limited to the minimum possible time and was recorded. Data from patients in whom intubation required longer than 20 seconds (sec) were excluded.
Blood samples were drawn before (baseline) and 1 min. after intubation and cuff inflation for measurement of serum catecholamine concentrations. The samples were collected into pre-chilled tubes containing EDTA/Na and immediately centrifuged. Plasma concentrations of epinephrine and norepinephrine were measured in duplicate by using high-pressure liquid chromatography 10.
After tracheal intubation, patients were ventilated to normocapnia with sevoflurane (2-3% end-tidal) in 50% oxygen in air. Two minutes after intubation (after collection of the blood sample), all patients received fentanyl 1.5 µg kg-1 i.v. and were monitored with ECG, SBP, DBP, MAP, SpO2 and end-tidal carbon dioxide (EtCO2). All measurements were completed before skin incision. At the end of surgery, muscle relaxation was reversed and patients were extubated.
Statistical analysis was performed using SPSS version 17. Numerical data are presented as mean ± SD. Statistical comparisons between the groups were performed using an unpaired t-test. Haemodynamic responses to induction and intubation within a given group were analysed using a paired t-test. The number of subjects enrolled was based on a power calculation for detecting a 20% difference between the two groups in MAP and HR from baseline values at an alpha error of 0.05 and a beta of 0.2. Categorical data were expressed as numbers and were analysed using the χ2 test where appropriate. A P value <0.05 was considered statistically significant.
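For readers who wish to see how such a sample size arises, the snippet below is a minimal sketch of a two-group power calculation at alpha = 0.05 and power = 0.8 (beta = 0.2) using the statsmodels library. The standardised effect size used here is an assumption made purely for illustration: the paper specifies the 20% target difference in MAP and HR but not the standard deviation on which its own calculation was based.

```python
# Minimal sketch of a two-sample t-test sample-size calculation of the kind
# described in the Methods (alpha = 0.05, power = 0.8). This is NOT the
# authors' computation: the standardised effect size (Cohen's d) below is an
# assumed value, since the paper does not report the SD behind its 20%
# difference in MAP/HR.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,  # assumed Cohen's d (illustrative only)
    alpha=0.05,       # two-sided type I error
    power=0.8,        # 1 - beta, with beta = 0.2
)
print(round(n_per_group))  # about 26 per group, of the same order as the 25 enrolled
```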
Results:
The two groups were comparable in demographic profile, duration of laryngoscopy and intubation as well as baseline haemodynamic parameters (table 1).
Table 1: Demographic and baseline haemodynamic characteristics and duration of laryngoscopy

Variable | Group S (Saline) | Group L (Lornoxicam)
No. of patients | 25 | 25
Sex (female/male) | 10/15 | 12/13
Age (yrs) | 31.5 ± 5.6 | 33.1 ± 4.4
ASA (I/II) | 19/6 | 20/5
Weight (kg) | 69.7 ± 4.2 | 66.9 ± 6.7
Height (cm) | 167.9 ± 8.6 | 170.2 ± 4.5
Duration of laryngoscopy and intubation (sec) | 14.9 ± 1.7 | 16.2 ± 1.2
HR (beats per minute) | 80.13 ± 8.69 | 81.87 ± 11.62
MAP (mmHg) | 89.97 ± 10.1 | 85.83 ± 9.23
Systolic BP (mmHg) | 120.2 ± 11.2 | 117.44 ± 17.1
Diastolic BP (mmHg) | 78.7 ± 9.91 | 73.13 ± 12.42

(Values are mean ± SD or number. No significant differences between groups.)
Table 2: Changes in heart rate (beats per minute)

Time | Group S (Saline) | Group L (Lornoxicam) | P
After induction | 85.15 ± 10.76 | 83.32 ± 8.44 | .062
0 min after intubation | 106 ± 14.3 | 88.17 ± 8.89 | .000*
1 min | 101.71 ± 11.15 | 86.92 ± 9.11 | .000*
2 min | 97.39 ± 12.07 | 84.88 ± 10.36 | .019*
3 min | 95.48 ± 12.95 | 81 ± 9.91 | .036*
Table 3: Changes in mean arterial pressure (mmHg)

Time | Group S (Saline) | Group L (Lornoxicam) | P
After induction | 84.65 ± 8.3 | 79.77 ± 9.92 | .055
0 min after intubation | 129 ± 16.54 | 91.73 ± 10.7 | .000*
1 min | 119.95 ± 18.2 | 86.01 ± 8.99 | .000*
2 min | 105.33 ± 13.15 | 83.62 ± 10.63 | .008*
3 min | 96.1 ± 10.11 | 83.47 ± 8.8 | .024*

(Values are mean ± SD; *P ≤ 0.05, statistically significant.)
All tracheal intubations were performed successfully by the same anaesthetist at the first attempt. Following the induction of anaesthesia, SBP, DBP and MAP decreased in both groups (figs. 1 and 2).
After intubation, the attenuation of the increase in SBP, DBP, MAP and HR in group L was statistically significant compared to group S and remained significant until 3 min after intubation. The haemodynamic variables are summarised in tables 2-5. The maximum rise in MAP and HR in group S at intubation was 30.5% and 42% respectively, whereas in group L the maximum rise in MAP and HR over the entire observation period was 7.1% and 6.2% respectively. Thereafter, SBP, DBP, MAP and HR decreased gradually in both groups to values similar to those noted before induction. Furthermore, blood samples collected one minute after intubation showed a significant increase in serum epinephrine and norepinephrine concentrations in group S compared to group L (fig. 3) (table 6).
Table 4: Changes in systolic blood pressure (mmHg)

Time | Group S (Saline) | Group L (Lornoxicam) | P
After induction | 107.38 ± 11.71 | 102.25 ± 12.89 | .069
0 min after intubation | 169.27 ± 18.29 | 117.35 ± 13.5 | .0001*
1 min | 141.53 ± 15.51 | 113.68 ± 12.91 | .005*
2 min | 128 ± 11.2 | 115.39 ± 14.17 | .014*
3 min | 122.99 ± 12.56 | 111.67 ± 14.8 | .037*

(Values are mean ± SD; *P ≤ 0.05, statistically significant.)
Table 5: Changes in diastolic blood pressure (mmHg)

Time | Group S (Saline) | Group L (Lornoxicam) | P
After induction | 72.49 ± 8.79 | 68.99 ± 8.1 | .085
0 min after intubation | 109.53 ± 14.22 | 78.48 ± 8.51 | .000*
1 min | 92.18 ± 10.63 | 74 ± 7.75 | .007*
2 min | 89.77 ± 11.34 | 78.12 ± 7.98 | .02*
3 min | 81.45 ± 8.8 | 73.6 ± 8.21 | .043*

(Values are mean ± SD; *P ≤ 0.05, statistically significant.)
Table 6: Changes in serum catecholamine levels (nmol/L)

Catecholamine | Group S (Saline) | Group L (Lornoxicam) | P
Epinephrine, pre-intubation | .195 ± .119 | .179 ± .104 | .085
Epinephrine, 1 min post-intubation | .206 ± .112 | .181 ± .087 | .038*
Norepinephrine, pre-intubation | 1.11 ± .633 | 1.098 ± .51 | .059
Norepinephrine, 1 min post-intubation | 1.499 ± .903 | 1.107 ± .524 | .000*

(Values are mean ± SD; *P ≤ 0.05, statistically significant.)
Discussion:
Lornoxicam has been used successfully in the prevention and treatment of postoperative pain.11 It has been reported that i.v. lornoxicam 8 mg is equianalgesic with morphine 20 mg12 and pethidine 50 mg,13 while lornoxicam 16 mg has a superior analgesic effect to tramadol 100 mg14 and is comparable to fentanyl 100 µg for intraoperative analgesia in mild to moderate day-case ENT surgical procedures.15
Our results showed a significant fall in SBP, DBP and MAP in both groups after induction. This might be due to the vasodilatation associated with the administration of propofol. Patients in both groups exhibited an increase in heart rate, since no medicine other than the study drug was given to reduce pain on propofol injection; propofol can cause significant tachycardia from injection pain, in addition to reflex tachycardia due to a decrease in systemic vascular resistance (SVR). Although SBP, DBP and MAP rose significantly for the first 3 minutes after intubation in the control group, a further reduction in SVR due to the vasodilator effect of sevoflurane is the probable reason for the return of the MAP to nearly baseline values over the remainder of the observation period. The fall in HR over the same period might be partly due to the bradycardia associated with the fentanyl administered 2 minutes after intubation in both groups.
In our study, lornoxicam attenuated the pressor response to laryngoscopy and tracheal intubation; SBP, DBP, MAP and HR were significantly lower in group L than in group S in the first 3 min after intubation. This may be attributable to the analgesic action of lornoxicam, mediated through the antiprostaglandin effect of COX inhibition, the release of endogenous dynorphin and β-endorphin,14 and a decrease in peripheral and central prostaglandin production;16 lornoxicam also exerts some of its analgesic activity via the central nervous system.17
In agreement with our results, Bruder and colleagues18 reported that laryngoscopy and intubation bypass the patient's protective airway reflexes, with marked reflex changes in the cardiovascular system, leading to an average increase in blood pressure of 40-50% and a 20% increase in heart rate. Kihara and colleagues19, comparing the haemodynamic response to direct laryngoscopy with that to the intubating laryngeal mask and the Trachlight device, reported that HR increased compared with preoperative baseline values in all groups. Moreover, both systolic and diastolic pressure increased for 2 min after tracheal intubation, with the highest values in the hypertensive group receiving direct laryngoscopy.
In a previous study by Riad and Moussa,7 i.v. administration of lornoxicam 8 mg half an hour before surgery, added to fentanyl 1 µg kg-1 during induction of anaesthesia, was found to attenuate the haemodynamic response to laryngoscopy and tracheal intubation in the elderly. However, it was unclear how much of that effect was attributable to the narcotic. Our study was therefore designed to evaluate lornoxicam alone, given as a single i.v. dose of 16 mg half an hour before surgery. Lornoxicam 8 mg was not used, as it has been shown to provide inadequate analgesia.15
Few studies have measured catecholamine levels after intubation. Our results are consistent with those of Russell et al2 and Shribman et al,20 who reported significant elevations in serum norepinephrine and epinephrine levels following laryngoscopy and tracheal intubation. Hassan and colleagues21 concluded that, during laryngoscopy and endotracheal intubation, placing the tube through the cords and inflating the cuff in the infraglottic region adds significantly to the sympathoadrenal response caused by supraglottic stimulation.
When assessing techniques to ameliorate the cardiovascular responses to intubation, the drugs used to induce anaesthesia may influence the results. We induced anaesthesia with propofol, which produces hypotension and may therefore partly offset the cardiovascular changes attributable to laryngoscopy and tracheal intubation; this could be considered a limitation of the present study. The omission of opioids during the induction of anaesthesia in healthy young patients should not be a concern.
In conclusion, pretreatment with lornoxicam, in the dose given in this study, attenuates the pressor response to laryngoscopy and tracheal intubation.
Carers play a vital role in supporting family members who are sick, infirm or disabled.1 There is no doubt that the families of those with mental disorders are affected by their relative's condition. Families not only provide practical help and personal care but also give emotional support to their relative with a mental disorder. The affected person is therefore dependent on the carer, and their well-being is directly related to the nature and quality of the care provided by the carer. These demands can bring significant levels of stress for the carer and can affect their overall quality of life, including work, socializing and relationships. Research into the impact of care-giving shows that one-third to one-half of carers suffer significant psychological distress and experience higher rates of mental ill health than the general population. Being a carer can raise difficult personal issues about duty, responsibility, adequacy and guilt.2 Caring for a relative with a mental health problem is not a static process, since the needs of the care recipient alter as their condition changes. The carer's role can be more demanding and difficult if the care recipient's mental disorder is associated with behavioural problems or physical disability. Over the past few decades, research into the impact of care-giving has led to an improved understanding of this subject, including the interventions that help. It is now recognized that developing constructive working relationships with carers, and considering their needs, is an essential part of service provision for people with mental disorders who require and receive care from their relatives.
The aim of this review was to examine the relationship between caring, psychological distress, and the factors that help caregivers successfully manage their role.
‘Family burden’ - The role of families as carers
Caring for someone with a mental disorder can affect the dynamics of a family. It takes up most of the carers’ time and energy. The family’s responsibility in providing care for people with mental disorders has increased in the past three decades. This has been mainly due to a trend towards community care and the de-institutionalization of psychiatric patients.3 This shift has resulted in the transferral of the day-to-day care of people with mental disorders to family members. Up to 90% of people with mental disorders live with relatives who provide them with long-term practical and emotional support.4, 5 Carer burden increases with more patient contact and when patients live with their families.6 Strong associations have been noted between burden (especially isolation, disappointment and emotional involvement), caregivers’ perceived health and sense of coherence, adjusted for age and relationship.7
The term ‘family burden’ has been adopted to identify the objective and subjective difficulties experienced by relatives of people with long-term mental disorders.8 Objective burden relates to the practical problems experienced by relatives, such as the disruption of family relationships; constraints on social, leisure and work activities; financial difficulties; and a negative impact on their own physical health. Subjective burden describes the psychological reactions which relatives experience, e.g. a feeling of loss, sadness, anxiety and embarrassment in social situations, the stress of coping with disturbing behaviours, and the frustration caused by changing relationships.9 Grief may also be involved: grief for the loss of the person's former personality, achievements and contributions, as well as for the loss of family lifestyle.10 This grief can lead to unconscious hostility and anger.9,10
The impact of caring on carers’ mental health
The vehicles of psychological stress have been conceptualized as adjustment to change,11 daily hassles,12 and role strains.13 Lazarus and Folkman (1984)14 define stress as ‘a particular relationship between the person and the environment that is appraised by the person as taxing or exceeding his or her resources and endangering his or her well being.’ The association between feelings of burden and the overall caregiver role is well documented.15 Caregivers provide assistance with activities of daily living, give emotional support to the patient, and deal with incontinence, feeding and mobility. Because of this high burden and responsibility, caregivers experience poorer self-reported health, engage in fewer health-promotion actions than non-caregivers, and report lower life satisfaction.16,17
The overarching theme from the findings is that carers and care recipients do not believe that care recipients' basic needs are being met, which causes them a great deal of distress and anger towards services and increases carer burden. Carers assert that the needs of care recipients and carers are interconnected and should not be seen as separate.18 The stress experienced by carers is best understood through Pearlin's stress-process model, shown in Figure 1.
Figure 1: Pearlin’s stress–process model of stress in carers (adapted from Pearlin et al, 1990)
The burden and depressive symptoms sustained by carers have been the two most widely studied care-giving outcomes. Reports indicate that depressive symptoms are twice as common among caregivers as among non-caregivers.19 Family caregivers who have significantly depressed mood may be adversely affected in their ability to perform desirable health-maintenance or self-care behaviours in response to symptoms.20 Family caregivers experience more physical and mental distress than non-caregivers in the same age group.16 Several studies suggest that many caregivers are at risk of clinical depression.21 Nearly half of the caregivers in some studies met the diagnostic criteria for depression when structured clinical interviews were used.22 There is also some evidence to suggest that a diagnosis of depression can be causally related to the care-giving situation: Dura et al (1991)23 found that nearly one quarter of caregivers met the criteria for depression while in the care-giving role, although they had never been diagnosed with depression before assuming this role. The worse the care recipient's problem behaviours and functional impairment, the higher the carer's strain score and the more likely the carer is to be depressed.24 The societal implications of this are underscored by reports indicating that stressed caregivers are more likely to institutionalize the care recipient.25,26
The impact of caring for different mental disorders
The impact of caring for different mental disorders, and the associated risk factors, is shown in table 1. Although only limited data are available on the psychological distress experienced by the carers of people with other mental disorders, these disorders appear to have a significant impact on families. Obsessive-compulsive disorder has a considerable impact on families and can lead to a reduction in social activities, causing isolation over time.38 People with obsessive-compulsive symptoms frequently involve their relatives in rituals,38 which can lead to increased anger and criticism towards them and has a negative impact on treatment outcomes.38 Caring for patients with eating disorders can be overwhelming for the carer. Available data suggest that the impact on carers of persons with anorexia nervosa may be even greater than for psychoses.39 Studies on bulimia nervosa indicate that carers have significant emotional and practical needs.40
Table 1: The impact of caring for different mental disorders and associated risk factors

Schizophrenia28
Risk factors: high disability, very severe symptoms, poor support from professionals, poor support from social networks, less practical social support, violence.
Impact on the carer: guilt, loss, helplessness, fear, vulnerability, cumulative feelings of defeat, anxiety, resentment and anger are commonly reported by caregivers.

Dementia29,30
Risk factors: decline in cognitive and functional status, behavioural disturbances, dependency on assistance.31
Impact on the carer: anger, grief, loneliness and resentment.

Mood disorders
Risk factors: symptoms, changes in family roles, cyclic nature of bipolar disorder, moderate or severe distress.32
Impact on the carer: significant distress,33 marked difficulties in maintaining social and leisure activities, decrease in total family income, considerable strains in marital relationships,34,35 psychological consequences during critical periods that also persist in the intervals between episodes in bipolar disorder,36 poorer physical health, limited activity, and greater health service utilization than non-caregivers.37
Table 2: Risk factors for carer psychological distress

Gender
· Women have higher rates of depression than men in the care-giving role.42
· 39% of female caregivers, compared with 16% of male caregivers, qualified as being at risk of clinical depression on the Center for Epidemiologic Studies Depression Scale (CES-D).43
· A randomized controlled trial44 found that women were more likely than men to comply with a home environmental modification intervention, implement recommended strategies, and derive greater benefits.
· Male carers tend to have more of a ‘managerial’ style that allows them to distance themselves from the stressful situation to some degree by delegating tasks.45

Age
· Age-associated impairments in physical competence make the provision of care more difficult for older caregivers.
· There is a positive association between age and caregiver burden in Whites, but a negative association for African-Americans, suggesting that older African-Americans are less likely to experience care-giving as physically burdensome.46

Caregiver health
· Caregiver health has been identified as a significant predictor of caregiver depression.46
· Caregivers have poorer physical health than age-matched peers, and such health problems are linked to an increased risk of depression.47
· Longitudinal studies have demonstrated that caregivers are at greater risk than non-care-giving age-matched controls of developing mild hypertension, have an increased tendency to develop serious illness,48 and are at increased risk of all-cause mortality.49

Ethnicity
· Ethnicity has a substantial impact on the care-giving experience.41
· Comprehensive reviews of the literature have identified differences in the stress process, psychological outcomes, and service utilization among caregivers of different racial and ethnic backgrounds.50
· Studies consistently show important differences in perceived burden and depression among African-American, White, and Hispanic family caregivers.51
· Caucasian caregivers tend to report greater depression and to appraise care-giving as more stressful than African-American caregivers.52
· Hispanic caregivers report greater depression and behavioural burden than Caucasians and African-Americans.53

Social support
· Social support has profound effects on caregiver outcomes.
· More social support corresponds to less depressive symptomatology47 and lower perceived burden.54
· Care-giving is associated with a decline in social support and with increased isolation and withdrawal.55
· Social support and caregiver burden have been found to mediate depression in caregivers.55
· Social support has other important functions: carers may find out about services from people who have used them before and form a network with others in similar situations.41
Factors associated with psychological distress of the carer
Risks for carer psychological distress or depression are related to gender, age, health status, ethnic and cultural affiliation, lack of social support, and certain other characteristics of the caregiver (table 2).41
Some of the patient factors related to psychological distress in carers are: behavioural disturbances, functional impairments, physical impairments, cognitive impairments, and fear that their relative may attempt suicide.
The frequency of behavioural disturbances manifested by the patient has been identified as the strongest predictor of caregiver distress and plays a significant role in the caregiver's decision to institutionalize the patient.25 The literature consistently shows that the frequency of behavioural problems is a more reliable predictor of caregiver burden and depression than the functional and cognitive impairments of the individual.56 Carers face unfamiliar and unpredictable situations, which increases stress and anxiety. Anxiety may be heightened by behavioural problems that cannot be successfully managed on a consistent basis.56 Anxiety is associated with depression, stress, and physical ill health.56
Findings regarding the relationship between functional impairment and negative caregiver outcomes have been inconclusive. Some studies document a weak association between objective measures of patient functional status and caregiver burden/depression,57 whereas others report a stronger relationship.54 Carers have reported great anxiety due to fear that their relative may attempt suicide.58 Carers of people with both physical and cognitive impairments have higher scores for objective burden of caring than those caring for people with either type of impairment alone.58 In contrast, scores for limitations on carers' own lives were higher among women caring for people with cognitive impairments (with or without physical impairments).59
Coping styles and interventions to reduce psychological distress in carers
There is increasing interest in examining the factors that help caregivers successfully manage their role, while minimizing the effect on their mood and general well-being.60 Much of this research has been done within the general framework of stress and coping theory,61 examining coping styles of caregivers and the relationship between types of coping styles and reported symptoms of depression.62 A variety of interventions have been developed which support caregivers (table 3). Interventions include: training and education programs, information-technology based support, and formal approaches to planning care which take into account the specific needs of carers, sometimes using specially designated nurses or other members of the health care team.63
Ballard et al (1995)64 demonstrated that a higher level of carer education regarding dementia increases carers' feelings of competency and is likely to reduce their expectations of their dependants' abilities. Previous studies of these coping strategies and feelings of competence have shown that unrealistic expectations of a dependant increase carers' risk of depression,65 and conversely that a reduction in carers' expectations is associated with lower rates of depression.66 Caregivers who maintain positive feelings towards their relative have a greater level of commitment to caring and a lower level of perceived strain.67 Furthermore, carers who experience feelings of powerlessness, lack of control, and unpreparedness have higher levels of depression.65 The most effective interventions for depression in carers appear to be a combination of education and emotional support.68
Spiritual support can also be considered a coping resource and has been studied in older African-Americans and older Mexican-Americans.69 Previous work examining the role of spiritual support has observed that African-American caregivers report greater spiritual rewards from caregiving70 and greater reliance on prayer and church support.71
Religious coping plays a paramount role, and it is often present at higher levels for African-Americans and Hispanics. For REACH caregivers, Coon et al (2004)72 found that religious coping is greater for Hispanic and African-American than for White caregivers. Religious involvement is frequently associated with more access to social support as well.73
Anecdotal literature74 suggests that caregivers who use more active coping strategies, such as problem solving, experience fewer symptoms of depression than do those who rely on more passive methods. Significant associations have been reported between positive strategies for managing disturbed behaviour, active strategies for managing the meaning of the illness, and reduced levels of caregiver depression. An important role for health-care professionals is in helping caregivers enhance their coping skills, supporting existing skills, and facilitating the development of new ones.66
Table 3: Coping styles and interventions to reduce psychological distress in carers
An important role for health-care professionals is in helping caregivers enhance their coping skills, supporting existing skills and facilitating the development of new ones.
· Training and education programs
· Information-technology based support
· Formal approaches to planning care
· Combination of education and emotional support
· Spiritual support
· Religious coping
· Positive strategies for managing disturbed behaviour
· High quality of informal relationships and presence of informal support
· Psychotherapy
· Cognitive-behavioural family intervention
Care-giving has some positive associations for caregivers, including pride in fulfilling spousal responsibilities, enhanced closeness with a care receiver, and satisfaction with one's competence.75 These perceived uplifts of care-giving are associated with lower levels of caregiver burden and depression.76 However perceived uplifts are more common among caregivers of colour than among Whites.77
High quality of informal relationships, and the presence of informal support, is related to lower caregiver depression78 and to less deterioration in emotional health for African-American caregivers, but not for Whites.79 Support from others helps to alleviate caregivers' stress if the supporter is understanding and empathic.74 In one study, caring for a family member was not perceived to be a burden, yet caregivers reported notable limitations on their social networks and social activities. They reported higher levels of unemployment than would be expected for the general population and were over-represented in lower income groups. Family carers are at high risk of social and economic disadvantage and of mental health challenges.80 Highly stressed persons may not be able to benefit from the attempted social support of others as much as moderately stressed persons.81
Caregivers need the opportunity to learn more effective ways of coping with stress. If they can learn new ways to cope, they can reduce their anxiety and reliance on treatments.41 Bourgeois et al (1997)82 reported that behavioural skills and effective self-management training programmes for caregivers result in a lower frequency of patient behavioural problems and help to improve caregivers' mood. Stevens and Burgio (2000)83 designed a caregiver intervention that teaches caregivers behavioural management skills to address problem behaviours exhibited by individuals with dementia, as well as problem-solving strategies to increase pleasant activities for the caregiver. Passive coping styles have been associated with greater burden, and persons who use an escape-avoidance type of coping are known to have more depression and interpersonal conflicts.41
Psychotherapy may be of some benefit in patients with early dementia but, because of cognitive loss, some adaptation of the technique is required and the involvement of carers is often necessary.84 Cognitive-behavioural family intervention can have significant benefits for carers of patients with dementia and has a positive impact on patient behaviour.85 From a cognitive perspective, care-giving has an important invisible component, which consists of interpreting the care receiver's behaviour, reflecting on the best way to adjust to it, and defining care objectives.86 Interventions requiring active participation by caregivers, and those based on cognitive behavioural therapy, can produce greater reductions in burden, anxiety and depression than those focused on knowledge acquisition.87
Among caregivers with depressive symptoms, 19% used antidepressants, 23% antianxiety drugs, and 2% sedative hypnotics. African-American caregivers were less likely than Whites to be taking antidepressants.88 In their study, Kales et al (2004)89 reported use of herbal products in 18% of elderly subjects with depression and/or dementia and in 16% of their caregivers.
In the Burdz et al (1988)90 study, respite care proved to have a positive effect on the burden experienced by the caregivers, and it also had a positive effect, against all expectations, on the cognitive and physical functioning of the persons with dementia.
There are more than twenty instruments with good psychometric properties that could be used as outcome measures with mental health carers. They can measure (i) carers' well-being, (ii) the experience of care-giving and (iii) carers' needs for professional support.91 The caregiver burden scale and the sense of coherence scale seem to be highly useful for identifying carers at risk of stress, the pattern of burden, and coping strategies. Nurses can help family caregivers to identify their negative experiences of care-giving and can help them reflect upon their coping strategies to find balance in their situation. Risk groups of caregivers, especially those with low perceived health and sense of coherence, may be identified for early interventions to reduce burden.7
Conclusion
The impact of caring for someone with mental illness brings the risks of mental ill health to the carer in the form of emotional stress, depressive symptoms, or clinical depression. Most individuals with mental disorders live in their own homes and are cared for by a family member. The caring process can be very taxing and exhausting, especially if the care recipient has a severe mental disorder. Providing such long-term care can be a source of significant stress. The behavioural problems associated with mental disorders further increase the stress levels of the carer and therefore impact significantly on their mental health.
Carers face mental ill health as a direct consequence of their caring role and experience higher rates of mental ill health than the general population. This leads to negative effects on the quality of life of the carer and on the standard of care delivered. Efforts to identify and treat caregiver psychological distress will need to be multidisciplinary, require consideration of the cultural context of the patient and caregiver, and focus on multiple risk factors simultaneously. The findings of the review underline the importance of early identification of carers, effective carer support, health promotion, monitoring of high-risk groups, and the timing of appropriate interventions.
As highlighted in the World Health Report 2002, just a few non-communicable disease (NCD) risk factors account for the majority of the NCD burden. These risk factors (tobacco use, alcohol consumption, raised blood pressure, raised lipid levels, increased BMI, low fruit/vegetable intake, physical inactivity, and diabetes) are the focus of the STEPS approach to NCD risk factor surveillance.3
A tool for surveillance of risk factors, WHO STEPS, has been developed to help low- and middle-income countries get started. It is based on the collection of standardised data from representative populations of specified sample size to ensure comparability over time and across locations. Step one gathers information on risk factors that can be obtained from the general population by questionnaire, including socio-demographic features, tobacco use, alcohol consumption, physical inactivity, and fruit/vegetable intake. Step two adds objective data from simple physical measurements needed to examine risk factors that are physiologic attributes of the human body: height, weight and waist circumference (for obesity), and blood pressure. Step three carries the objective measurement of physiologic attributes one step further with the inclusion of blood samples for measuring lipid and glucose levels.4 The risk factors studied by the MONICA project of the World Health Organization (WHO) included cigarette smoking, blood pressure, blood cholesterol and body weight.5 In many resource-poor settings, laboratory access can be difficult and expensive. A screening algorithm that includes gender, age, cardiovascular disease history, blood pressure, weight and height, and a urine dipstick test for glucose and protein is likely to be more practical and may well provide much of the predictive value of more complex blood-based assessments.6 In addition, such algorithms should, wherever possible, use regional data on morbidity and mortality, because background rates vary considerably between regions.7 WHO/ISH (World Health Organization/International Society of Hypertension) risk prediction charts provide approximate estimates of cardiovascular disease (CVD) risk in people who do not have established coronary heart disease, stroke or other atherosclerotic disease. They are useful as tools to help identify those at high cardiovascular risk, and to motivate patients, particularly to change behaviour and, when appropriate, to take antihypertensive and lipid-lowering drugs and aspirin.8 After reviewing the above information about the standardised methods available for identifying the risk factors for CAD, the present study was undertaken to assess the prevalence of risk factors in the community in Tripoli, the capital of Libya. The aim of this paper also includes suggesting priorities and a strategy to deal with the risk factors that were found most important. Appropriate statistical tests were applied using the software SPSS 17 to determine the relative importance of different risk factors; the specific tests are stated below.
Material and Methods
528 individuals were selected from the general community for the study by random sampling from different geographical areas of Tripoli. They were interviewed about risk factors for CAD and, where possible, facts stated by them were validated from the medical records available with them. Their body weight, height and blood pressure were also recorded. The intern doctors posted with the community medicine department were briefed and trained by faculty members in making the above observations and in recording body measurements and blood pressure using a uniform technique. The WHO/ISH risk prediction colour charts for Eastern Mediterranean Region B (which includes Libya) were used as the questionnaire for the study.
The option of charts for settings where blood cholesterol cannot be measured was selected, as it was found difficult to convince individuals not suffering from disease to provide blood samples. The following criteria were used for defining blood pressure, BMI, diabetes and MI. According to the WHO definition, individuals with systolic blood pressure ≥ 140 mmHg or diastolic blood pressure ≥ 90 mmHg were considered hypertensive.21 Known cases of diabetes were defined as individuals for whom the diagnosis of diabetes had been established by a physician in the past, or who were under treatment with antidiabetic drugs.22 Body mass index (BMI) is calculated as weight divided by height squared (kg/m²); overweight is defined as BMI 25–29.9 kg/m² and obesity as BMI ≥ 30 kg/m² for all subjects.19 Known cases of myocardial infarction (MI) were defined as individuals for whom the diagnosis of MI had been established by a physician in the past. These case definitions reduce to simple threshold checks, as shown in the sketch after Table 1.
Observations
The comparison of the population characteristics of people with and without MI, stated in the table below, reveals that the distribution of males and females was similar in both groups. 88.57% of individuals with MI were aged 35 and above, whereas 11.43% were aged 15 to 34 years, which shows the need to start screening and control of risk factors from the teenage years. Using SPSS software, an independent-samples t test was applied to the age distribution of individuals with and without a history of MI. The mean age of individuals with a positive history of MI was 54.00, against 43.74 for subjects with a negative history of MI; the difference in age between the two groups was highly significant (P < 0.001). In the same manner, a chi-square test was applied to the sex distribution of individuals with and without a history of MI; the difference in sex distribution between the two groups was not significant (P = 0.522).
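For readers who wish to reproduce this comparison outside SPSS, the sketch below applies a chi-square test of independence to the counts from Table 1 (shown below). The use of scipy is our assumption, since the study itself used SPSS 17, and the exact p-value obtained depends on settings such as Yates' continuity correction.

```python
# A sketch of the sex-distribution comparison using scipy rather than SPSS,
# based on the counts reported in Table 1 below.
from scipy.stats import chi2_contingency

#                 male  female
counts = [[48,    22],    # individuals with MI (N = 70)
          [315,   143]]   # individuals without MI (N = 458)

# For a 2x2 table, scipy applies Yates' continuity correction by default;
# SPSS reports both corrected and uncorrected statistics.
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # p > 0.05: not significant
```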
Table 1: Age and sex-wise distribution of persons with and without MI

| Characteristics | Individuals with MI, % (n) (N = 70) | Individuals without MI, % (n) (N = 458) |
|---|---|---|
| Sex: Male | 68.57 (48) | 68.78 (315) |
| Female | 31.43 (22) | 31.22 (143) |
| Age: 15–34 years | 11.43 (8) | 34.93 (160) |
| 35–54 years | 30.00 (21) | 37.55 (172) |
| 55 & above | 58.57 (41) | 27.51 (126) |
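The case definitions stated in the methods amount to simple threshold checks. A minimal sketch encoding them (function and variable names are illustrative, not from the study):

```python
def is_hypertensive(systolic_mmhg: float, diastolic_mmhg: float) -> bool:
    # WHO definition used in this study:
    # systolic >= 140 mmHg or diastolic >= 90 mmHg.
    return systolic_mmhg >= 140 or diastolic_mmhg >= 90

def bmi_category(weight_kg: float, height_m: float) -> str:
    # BMI = weight / height^2 (kg/m2); overweight 25-29.9, obese >= 30.
    bmi = weight_kg / height_m ** 2
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "normal or underweight"

# Example: 85 kg at 1.70 m gives a BMI of ~29.4 -> overweight.
print(bmi_category(85, 1.70))
```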
Independent risk factors
As presented in Fig. 1, in males with MI the most prevalent risk factor was hypertension (11.05% higher than in the non-MI group), followed by diabetes (higher by 10.78%), smoking (higher by 8.12%) and BMI 25 & above (higher by 5.13%). As presented in Fig. 2, in females with MI the most prevalent risk factor was hypertension (20.55% higher than in the non-MI group), followed by BMI 25 & above (higher by 8.77%) and diabetes (higher by 6.85%). There were no smokers in the female group with MI, and only one smoker was found among the females without MI. Using SPSS software, under the general linear model, multivariate analysis was performed after splitting the cases by sex. History of MI was kept as the fixed factor, and age, history of hypertension, diabetes and stroke, smoking, systolic blood pressure and BMI were kept as dependent variables. In males with a positive history of MI, P was less than 0.001 (highly significant) for age, history of hypertension and diabetes, and systolic BP of 140 and greater, followed by history of stroke (P = 0.002), suggesting that the prevalence of these variables was significantly higher in males with a history of MI. The prevalence of BMI 25 & above (P = 0.616) and smoking (P = 0.882) in males with a history of MI was not significant. In females with a positive history of MI, the only variable with a significantly higher prevalence was history of hypertension (P = 0.008). An important reason for the inability to assess significance for other variables in females may be the small number of females with a history of MI (only 22). Among the community members with MI, 94.38% of males and 78.57% of females had one or other of the risk factors stated above. Hence, with focused attention to health education and screening for risk factors, it should be possible to identify most of the individuals at risk of MI.
(Fig. 1) Distribution of risk factors in males with and without MI (the total number of responses is more than the number of respondents because more than one risk factor was present in many respondents)
(Fig. 2) Distribution of risk factors in females with and without MI (the total number of responses is more than the number of respondents because more than one risk factor was present in many respondents)
Combination of risk factors
Out of 48 males with MI, 22 (45.83%) had both diabetes and hypertension, and half of them (22.92%) were also smokers. The next group among males with multiple risk factors was that of smokers, 14 (29.17%), of whom half (14.58%) also had hypertension. Out of 22 females with MI, 13 (59.09%) had hypertension, and 27.27% of them were also diabetic. The next group was that of diabetics, 3 (13.64%). Hence, looking at the combination of risk factors in both males and females with MI, the most common risk factor in terms of prevalence was hypertension, followed by smoking in men and diabetes in women. As hypertension and BMI in the age group of 35 to 54 years were found to be significant and commonly present risk factors, the data were explored further.
Systolic BP 140 and above: The percentage of persons with MI having a systolic BP of 140 and above in the age group 35 to 54 years (66.67%, Fig. 3) was more than double the 30% expected from the number of persons in this age group (Table 1). Hence in this age group there appears to be considerable opportunity for detecting and treating cases of hypertension in the general community before they reach the advanced stage of coronary artery disease and MI.
(Fig. 3) Age-wise distribution of blood pressure (both sexes)
Body Mass Index: As presented in Fig. 4, the percentage of overweight and obese individuals was 5 to 9 percent higher in those with MI than in those without MI. The percentage of obese people doubled in both groups (with and without MI) as age advanced from 15–34 to 35–54 years. The percentage of overweight individuals in the 35–54 year age group was 1.48 times that of the 15–34 year age group in those without MI, and 1.77 times in those with MI.
(Fig. 4) Age-wise distribution of weight (both sexes)
Discussion: Comparison with other relevant studies
In our study the most common risk factors observed in community members without MI were hypertension (total 24.35%; males 23.78%, females 25.88%), followed by diabetes (total 21.13%; males 19.56%, females 25.29%) and smoking (total 27.26%; males 37.33%, females 0.59%), as stated above in Figs. 1 and 2. In similar studies performed in countries of the Mediterranean region,14-18 26% of the study population were found to be suffering from hypertension, 40% of males and 13% of females were smokers, and 14.5% were suffering from diabetes.13 The percentage of diabetics was 10.6% in a study population aged 30 years and above in Iran.11 The percentage of diabetics was 11% in males and 7% in females in the United Arab Emirates (UAE),10 and the corresponding figures in Saudi Arabia, in subjects aged 30 years and above, were 17.3% and 12.18% respectively.9 All the above studies were performed in the period from 2000 to 2004, except the study in the UAE, which was performed in 1995. It can be seen from our study in Libya that, in comparison with the mean percentages for the same risk factors in other countries of the Mediterranean region, the percentage with hypertension was lower by about 2%, the percentage of total diabetics in the general community was greater by 6.6%, and the percentage of smokers was less by about 13% in males and 12.5% in females. The percentage of overweight and obese individuals across all age groups and both sexes was 66.6% in the general community without MI in our study (Fig. 4). The corresponding percentage in individuals above 19 years of age was 26.2% in a study from Iran,12 and 27% in the UAE10 in the age group of 30 to 64 years. The WHO study of 12 countries of the Eastern Mediterranean Region (EMR), conducted in 2004, reveals that the regional adjusted mean for these countries was 43% for overweight and obese individuals across all age groups and both sexes.20 Hence, in comparison with developing countries of the region having similar religious, social and dietary situations, diabetes and obesity can be seen as emerging major risk factors for CAD in Libya, followed by hypertension and smoking. Smokers among females were found to be uncommon in Libya.
Conclusion
The findings of this study reveal that, in comparison with those without MI, the prevalence of the following risk factors was higher in individuals with MI: in males aged 35 to 54, the percentage with a systolic BP of 140 and greater was more than double, and in females 1.6 times greater; those with diabetes were greater by 10.78% in males and 6.85% in females, while smokers were higher by 8.12% in males.
The percentage of diabetics among individuals without MI was 21.13%. The prevalence of smoking was 37.33% in males without MI, which suggests an urgent need for prevention and control measures. Considering multiple risk factors, out of 48 males with MI, 22 (45.83%) had both diabetes and hypertension, and half of them (22.92%) were also smokers. Out of 22 females with MI, 13 (59.09%) had hypertension, and 27.27% of them were also diabetic. In view of the large number of individuals with risk factors for CAD in Tripoli, we recommend that health education for preventing overweight and obesity, hypertension, smoking and diabetes be started with school children and their parents as early as primary school. Screening for the above risk factors needs to be implemented in the age group of 34 years and above, to detect individuals at risk as close to age 34 as possible. This step needs to be followed by relevant health education and treatment as soon as possible. More studies on a larger population sample are required from different geographical areas of Libya to refine our focus on the target population identified. At the same time, waiting to act until these additional studies are completed is not recommended. To make the comparison of risk factors more fruitful among different countries, and within the same country over time, we need to agree on uniform criteria, such as using the WHO/ISH risk prediction charts.
Limitations of the present study
This is a cross-sectional study based on the questions stated in the WHO/ISH prediction charts for situations where collecting blood samples is not feasible. Because of the small sample size, we can only say that the prevalence of MI is indicative of the pattern observed; these figures may be refined as we cover a larger proportion of the population over time. Due care was taken in selecting the sample to represent the different geographical divisions of Tripoli and to ensure a random sample, but it is a systematic random sample and not a stratified random sample. Hence, within each geographical division, all socio-economic strata of the community may not have been proportionately represented.
Appendix
The questionnaire used for the study is stated below. It is based on the questionnaire recommended on page 21 of the WHO/ISH risk prediction charts for Eastern Mediterranean Region B, in which Libya is included.
Questionnaire
Precautions: Do not interview persons below the age of 14 years. You should take the height, weight and blood pressure of the person yourself before recording it in the form below.
| S.N. | Question | Subject 1 | Subject 2 | Subject 3 | Subject 4 | Subject 5 |
|---|---|---|---|---|---|---|
| 1 | Name of person: | | | | | |
| 2 | Address in Libya | | | | | |
| 4 | Age | | | | | |
| 5 | Sex: M / F | | | | | |
| 6 | Do you smoke: Yes / No | | | | | |
| 8 | Do you have a history of suffering from diabetes: Yes / No | | | | | |
| 9 | History of suffering from myocardial infarction: Yes / No | | | | | |
Antibiotics serve a very useful therapeutic purpose in eradicating pathogens.1,2 Unfortunately, excessive and inappropriate use of antibiotics results in antibiotic resistance, a rapidly increasing global problem with a strong impact on morbidity and mortality.3-5 It is now evident that self-medication is widely practised in both developing6-11 and developed countries.12-18 India is also experiencing this problem of inappropriate self-medication in significant numbers.19,20
Unlike the rest of the population, when physicians become ill they can prescribe medicines for themselves very easily. Medical knowledge and access to prescription medications increase the potential for self-treatment. Although many warn of the loss of objectivity that can accompany self-prescription, previous studies suggest that self-prescription is common among practising physicians.21-24 The purpose of the present study is to evaluate self-prescription and self-care practices among government doctors in the Hassan District of Karnataka.
Materials and methods
A cross-section of doctors attending the CME programme at Hassan Institute of Medical Sciences, Hassan, was selected for the project during August 2009. A self-assessment questionnaire was distributed among the participants after explaining the purpose of the study and taking informed oral consent. The study received prior approval from the institutional ethics committee. A total of 160 doctors (all participants were male) were chosen randomly for participation in the study.
The questionnaire consisted of both closed and open-ended questions. A total of 21 questions covered the following: socio-demographic characteristics (such as age, sex and personal habits) and patterns of self-medication with antibiotics (e.g. type of antibiotics used, frequency, whether the course of antibiotics was completed, and the health condition that led to self-medication).
After data collection was complete, the data were reviewed, organised and evaluated using the chi-square test and analysis of variance (one-way ANOVA) in the Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL) for Windows, version 14; a p-value of <0.05 was considered statistically significant.
Results
A total of 160 male doctors agreed to participate in the study. Twenty-eight percent of them were postgraduate qualified (e.g. MD, MS in different specialities) and 72% were MBBS qualified only. Eighty-six percent of them were aged between 36 and 45 years.
Fifty-three percent of doctors had used self-prescribed antibiotics, with self-diagnosis, within the 6 months before the study.
Table – 1: Characteristics of the respondents

| Variables | Doctors % |
|---|---|
| Used self-medication with antibiotics | 53.0 |
| How many times: once/day | 55.8 |
| How many times: twice/day | 10.4 |
| How many times: >3 times | 16.1 |
| Completed the course | 26.8 |
Table – 2: Factors that led to self-medication

| Conditions | Doctors % |
|---|---|
| Respiratory infections | 66.7 |
| GI problems | 23.4 |
| Systemic problems | 7.7 |
| Skin problems | 2.6 |
| Urinary tract conditions | 0 |
Table – 3: Antibiotics used for self-medication

| Name of the antibiotic | Doctors % |
|---|---|
| Penicillins | 68.0 |
| Amoxicillin | 40.0 |
| Fluoroquinolones | 13.3 |
| Co-amoxiclav | 6.8 |
| Macrolides | 8.0 |
| Tetracyclines | 2.7 |
| Cephalosporins | 4.0 |
| Sulphonamides | 2.2 |
| Metronidazole | 1.2 |
| Tinidazole | 2.0 |
The frequency of antibiotic use was once in 55.8%, twice in 10.4% and three times or more in 16.1% over the study period (p < 0.05). Only 26.8% of the doctors in this study completed the course of antibiotic therapy (p < 0.05) (Table 1).
The factors that led to self-medication among respondents were perceived respiratory infections in 66.7%, gastrointestinal diseases in 23.4%, systemic diseases in 7.7% and skin diseases in 2.6% (Table 2).
Table 3 shows the antibiotics most frequently used for self-medication. Penicillins ranked highest (68%), and within this group amoxicillin was the most frequently used (40%). Next were the fluoroquinolones (13.3%), followed by macrolides (8%). Other less frequently used drugs were co-amoxiclav, cephalosporins, tetracyclines, sulphonamides, tinidazole and metronidazole.
Discussion
The current study examined antibiotic self-medication among government medical doctors in Hassan district who were attending a CME programme at HIMS, Hassan. Studies on factors associated with antibiotic use are important for preventing the emergence of antibiotic resistance,9 which is a well-known problem in many countries.7-18 In self-medication, antibiotic use for the various conditions was always empirical, without professional opinion or laboratory investigation.
The sources of the antibiotics were medical representatives (47.8%) and drug stores without prescription (44.8%), even though antibiotics are prescription-only medicines. Although violation of this law is subject to a financial penalty, the law is not strictly enforced in the case of doctors, which has allowed the practice to continue. Self-medication with antibiotics may increase the risk of inappropriate use and the selection of resistant bacterial strains.25,26 There have been several reports addressing the extent of self-medication with antibiotics among university students in other countries,27,28 but few about doctors. This should be analysed further.
In this study, more than 53% of the respondents had practised self-medication with antibiotics within the 6 months before the study. This rate is similar to the findings of a study in Turkey (45.8%),29 a recent study in Jordan (40.7%),9 and other studies in Sudan (48%),7 Lithuania (39.9%)30 and the USA (43%).17
Higher rates of self-medication are reported from China (59.4%) and Greece (74.6%).14 Lower rates are reported from Palestinian students (19.9%),27 Mexico (5%),31 Malta (19.2%)18 and Finland (28%).12 It seems that the lower rates of self-medication in these cases were due to respiratory diseases being treated symptomatically rather than with antibiotics.
Only 26.8% of respondents completed the course of antibiotic therapy. This is similar to the result of a study in Jordan (37.6%).9
The most common diseases treated with antibiotics were respiratory tract infections (common cold, sore throat, and sinusitis). Such diseases were also reported to be the common cause of self-medication in Jordan,9 Palestine,27 Turkey28 and European countries.16 These conditions are known to be of viral origin,32 requiring no antibiotic treatment.
The main antibiotics used in self-medication were penicillins in general, and particularly amoxicillin. Similar results are reported by other studies from different parts of the world.8,33 This may be due to the low cost of broad-spectrum penicillins throughout the world.8
It is agreed by some researchers that adverse effects due to inadequate and inappropriate use of antibiotics without prescription can be minimised by proper education.34 This can be done effectively through national awareness programmes, educational programmes (rational drug use, intensive medical monitoring of prescriptions, evidence-based practice, and essential drug use) and CME programmes.
We also suggest specific education about antibiotics in all educational and research institutions.
There are a few limitations to this study. First is its reliance on self-reported data about self-medication with antibiotics. Secondly, it refers to any previous use of self-medication with antibiotics (a retrospective design). Another limitation is that our population sample may not be representative of the doctor population of the entire district. National education programmes about the dangers of irrational antibiotic use, and restriction of antibiotics without prescription, should be the priority. This study indicated that doctors' knowledge regarding antibiotics cannot be evaluated alone, since it did not always correlate with behaviour.
Conclusion
Almost all medical doctors practice self-treatment when they are ill. Although they prefer to be treated by a physician, due to complex reasons including ego and a busy professional work pattern, there is a certain amount of hesitation in consulting professional colleagues when they need medical help.
The prevalence of self-medication practices is alarmingly high in the medical profession, despite the majority knowing that it is incorrect. We recommend a holistic approach to prevent this problem from escalating, involving: (i) awareness and education regarding the implications of self-medication; (ii) strategies to prevent the supply of medicines without prescription by pharmacies; (iii) strict rules regarding pharmaceutical advertising; and (iv) strategies to make access to health care much less difficult.
Our study has also opened gateways for further research on this issue, besides showing that it is a real problem which should not be ignored, in and around Karnataka, in India, and all over the world.
Acute Lung Injury (ALI) is a continuum of clinical and radiographic changes affecting the lungs, characterised by acute onset severe hypoxaemia, not related to left atrial hypertension, occurring at any age. At the severe end of this spectrum lies Acute Respiratory Distress Syndrome (ARDS) and therefore unless specifically mentioned this review will address ARDS within the syndrome of ALI.
It was first described by Ashbaugh in the Lancet in 1967. This landmark paper described a group of 12 patients with “Respiratory Distress Syndrome” who had refractory hypoxaemia, decreased lung compliance, diffuse infiltrates on chest radiography and required positive end expiratory pressure (PEEP) for ventilation.1
Key Points on Acute Lung Injury
Common, life threatening condition which is a continuum of respiratory dysfunction with ALI and ARDS being at either end of the spectrum
Risk factors include conditions causing direct and indirect lung injury, leading to an inflammatory response which can cause multiple organ failure
Damage to alveolar epithelial cells and capillary vasculature impair gas exchange and can lead to fibrosis
Management aims include supportive care, maintaining oxygenation and diagnosing and treating the underlying cause
Evidence supports low tidal volume ventilation and conservative fluid management
Long term outcomes relate to neuromuscular, neurocognitive and psychological problems rather than pulmonary dysfunction
This initial description gave only vague criteria for diagnosis, focused on the most severe end of the continuum and was not specific enough to exclude other conditions. A more precise definition was provided by Murray et al. in 1988, using a four-point lung injury scoring system comprising the level of PEEP used in ventilation, the ratio of arterial oxygen tension to fraction of inspired oxygen (PaO₂/FiO₂), static lung compliance and chest radiography changes.2 Despite being more specific and assessing severity, it was too large and complex for practical purposes in the ICU setting.
It was not until 1994 that the American–European Consensus Conference on ARDS set the criteria used today to define both ALI and ARDS in research and clinical medicine. It recommended ALI be defined as “a syndrome of inflammation and increased permeability that is associated with a constellation of clinical, radiological and physiological abnormalities that cannot be explained by, but may coexist with, left atrial or pulmonary capillary hypertension”.3 They distinguished between ALI and ARDS based upon the degree of hypoxaemia present, as determined by the ratio of partial pressure of arterial oxygen to fractional inspired oxygen concentration (PaO₂/FiO₂), with ALI patients demonstrating a milder level of hypoxaemia. Additionally, ARDS changed from Adult Respiratory Distress Syndrome to Acute Respiratory Distress Syndrome to account for its occurrence at all ages.
DIAGNOSIS AND PROBLEMS RELATED TO THIS
There are no gold-standard radiological, laboratory or pathological tests to diagnose ALI and ARDS, and patients are given the diagnosis based on meeting the criteria agreed in 1994 (see Table 1).
ALI is diagnosed clinically and radiologically by the presence of non-cardiogenic pulmonary oedema and respiratory failure in the critically ill.
Table 1 – Diagnostic Criteria for ALI and ARDS

| | ALI | ARDS |
|---|---|---|
| Onset | Acute | Acute |
| Oxygenation (PaO₂/FiO₂) ratio in mmHg, regardless of ventilatory settings | <300 | <200 |
| Chest radiological appearance | Bilateral pulmonary infiltrations, which may or may not be symmetrical | Bilateral pulmonary infiltrations, which may or may not be symmetrical |
| Pulmonary wedge pressure (in mmHg) | <18, or no clinical evidence of left atrial hypertension | <18, or no clinical evidence of left atrial hypertension |
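Because the ALI/ARDS split rests on a single ratio, the oxygenation criterion in Table 1 reduces to a short calculation. A minimal illustrative sketch (PaO₂ in mmHg, FiO₂ as a fraction; names are ours):

```python
def classify_lung_injury(pao2_mmhg: float, fio2_fraction: float) -> str:
    """Apply the 1994 consensus PaO2/FiO2 thresholds from Table 1.

    Assumes the onset, radiographic and wedge-pressure criteria are already
    met; this sketch checks only the oxygenation criterion.
    """
    ratio = pao2_mmhg / fio2_fraction
    if ratio < 200:
        return "ARDS"
    if ratio < 300:
        return "ALI"
    return "does not meet oxygenation criterion"

# Example: PaO2 of 75 mmHg on 50% oxygen gives a ratio of 150 -> ARDS.
print(classify_lung_injury(75, 0.5))
```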
Meeting criteria is not, in itself, a problem when diagnosing conditions in the ICU setting, as sepsis and multi-organ failure are also defined using consensus-based syndrome definitions; however, there are problems specific to the diagnosis of ALI.
In practice, ALI and ARDS are clinically under-diagnosed, with reported recognition rates ranging between 20 and 48% of actual cases.4 This is due to poor reliability of the criteria, related to:
Non-specific radiological findings which are subject to inter-observer variability
The oxygenation criterion is independent of inspired oxygen concentration and of ventilator settings, including lung volumes and PEEP
Excluding cardiac causes of pulmonary oedema, including left ventricular failure, mitral regurgitation and cardiogenic shock, is difficult in the ICU setting even when pulmonary artery catheters are used
The definition includes a heterogeneous population who behave very differently in response to treatment, duration of mechanical ventilation and severity of pulmonary dysfunction.
However this is the definition used by the ARDS network (a clinical network set up in 1994 by The National Heart, Lung and Blood Institute and the National Institutes of Health in the USA) for its clinical trials and on this basis it is validated.
EPIDEMIOLOGY
Incidence
The incidence of ALI is reported as 17-34 per 100,000 person-years.5 Unfortunately, despite population studies demonstrating fairly consistent trends regarding age (mean approximately 60 years), mortality (35-40%) and the ratio of ARDS to ALI (around 70%), incidence figures are less consistent internationally. A recent prospective population-based cohort study in a single US county demonstrated a higher incidence of around 78.9 per 100,000 person-years and inferred from this that 190,600 cases could occur in the USA alone each year.6 This variation is likely due to problems with the reliability of diagnosis, as illustrated above, and also to ALI generally presenting as a critical care illness, making its epidemiology directly linked to the availability of ICU resources.
Cases are only “captured” in the ICU setting, and the syndrome potentially exists outside this environment in unknown quantities.7 Taking this into account means ALI and ARDS are probably far commoner in clinical practice than reported, and many patients may meet the diagnosis yet be managed outside the ICU environment.8
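As a purely illustrative check of the extrapolation quoted above, the 190,600 cases per year implied by an incidence of 78.9 per 100,000 person-years corresponds to a base population of roughly 242 million:

```python
# Back-calculating the population implied by the quoted US estimate
# (78.9 cases per 100,000 person-years -> 190,600 cases per year).
incidence_per_100k = 78.9
cases_per_year = 190_600

implied_population = cases_per_year / (incidence_per_100k / 100_000)
print(f"{implied_population / 1e6:.0f} million")  # ~242 million
```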
Risk Factors
ALI is a multi-factorial process which occurs when environmental triggers act on genetically predisposed individuals: ALI-inducing events are common, yet only a fraction of those exposed develop the syndrome.
Environmental triggers for developing ALI can be divided into those causing direct and those causing indirect lung injury, with sepsis, either intrapulmonary or extrapulmonary being the commonest cause. (See table 2)
Table 2 Direct and Indirect triggers for ALI

| | Direct Lung Injury | Indirect Lung Injury |
|---|---|---|
| Common | Pneumonia; aspiration of gastric contents | Sepsis; severe trauma with shock and multiple transfusions |
| Less common | Pulmonary contusion; fat / amniotic fluid embolism; high altitude; near drowning; inhalation injury; reperfusion injury | Burns; disseminated intravascular coagulation; cardiopulmonary bypass; drug overdose (heroin, barbiturates); acute pancreatitis; transfusion of blood products; hypoproteinaemia |
At present there is research into the role of genetic factors and how they contribute to susceptibility and prognosis.9 It is difficult to assess the molecular basis of ALI due to the range of ALI inducing events which can cause the lung injury, the heterogeneous nature of the syndrome itself, presence of additional comorbidities, potentially incomplete gene penetrance and complex gene-environment interactions. However possible candidate genes which predispose patients to ALI have been identified and other genes exist which may influence its severity, thus providing targets for research in treatment development.
Secondary factors including chronic alcohol abuse, chronic lung disease and low serum pH may increase risk of developing ALI.⁷ There may be factors which are protective against its development, such as diabetes in septic shock patients,10 but further research is required.
PATHOPHYSIOLOGY
It is thought that ALI patients follow a similar pathophysiological process independent of the aetiology. This occurs in two phases, acute and resolution, with a possible third fibrotic phase occurring in a proportion of patients.
Acute Phase
This is characterised by alveolar flooding with protein rich fluid secondary to a loss of integrity of the normal alveolar capillary base, with a heterogeneous pattern of alveolar involvement.
There are two types of alveolar epithelial cells (Table 3), both of which are damaged in ALI, likely via neutrophil mediation, with macrophages secreting pro-inflammatory cytokines, oxidants, proteases, leucotrienes and platelet activating factor.
Table 3 Characteristics of Type I and Type II Alveolar Epithelial Cells

| | Type I | Type II |
|---|---|---|
| Percentage of cells | 90% | 10% |
| Shape | Flat | Cuboidal |
| Function | Provide lining for alveoli | Replace damaged type I cells by differentiation; produce surfactant; transport ions and fluids |
Damage to type I alveolar epithelial cells causes disruption to alveolar-capillary barrier integrity and allows lung interstitial fluid, proteins, neutrophils, red blood cells and fibroblasts to leak into the alveoli.
Damage to type II cells decreases surfactant production, and the surfactant that is produced is of low quality and likely to be inactivated by the fluid now in the alveoli, which leads to atelectasis. Additionally, there is impaired replacement of type I alveolar epithelial cells and an inability to transport ions and therefore to remove fluid from the alveoli.
Coagulation abnormalities occur including abnormal fibrinolysis and formation of platelet and fibrin rich thrombi which result in microvascular occlusion, causing intrapulmonary shunting leading to hypoxaemia.
Ventilation-perfusion mismatch, secondary to alveolar collapse and flooding, decreases the number of individual alveoli ventilated, which in turn increases alveolar dead space, leading to hypercapnia and respiratory acidosis. Additionally, pulmonary compliance decreases and patients start to hyperventilate in an attempt to compensate for the above changes.
The release of inflammatory mediators from damaged lung tissue triggers systemic inflammation and systemic inflammatory response syndrome (SIRS) which may progress to multiple organ failure, a leading cause of death in ARDS patients.
Resolution Phase
This phase is dependent on repair of alveolar epithelium and clearance of pulmonary oedema and removal of proteins from alveolar space.
The type II alveolar epithelial cells proliferate across the alveolar basement membrane and then differentiate into type I cells. Fluid is removed by initial movement of sodium ions out of the alveoli via active transport in type II alveolar epithelial cells, with water then following, down a concentration gradient through channels in the type I alveolar epithelial cells.
Soluble proteins are removed by diffusion and non soluble proteins by endocytosis and transcytosis of type I alveolar epithelial cells and phagocytosis by macrophages.
Fibrotic Phase
Some patients do not undergo the resolution phase but progress to fibrosing alveolitis, with fibrosis being present at autopsy in 55% of non-survivors of ARDS.11 This occurs by the alveolar spaces filling with inflammatory cells, blood vessels and abnormal and excessive deposits of extracellular matrix proteins, especially collagen fibres.12 Interstitial and alveolar fibrosis develops, with an associated decrease in pulmonary compliance and only partial resolution of pulmonary oedema with continued hypoxaemia.
CLINICAL FEATURES
Acute Phase
The diagnosis should be considered in all patients with risk factors who present with respiratory failure, as the onset, though usually over 12 to 72 hours, can be as rapid as 6 hours in the presence of sepsis.
Patients present with acute respiratory failure where hypoxaemia is resistant to oxygen therapy and chest auscultation reveals diffuse, fine crepitations, indistinguishable from pulmonary oedema.
Resolution Phase
This phase usually occurs around 7 days after the onset of ALI, when a resolution of hypoxaemia and an improvement in lung compliance are seen.
Fibrotic Phase
There is persistent impairment of gas exchange and decreased compliance. In severe cases it can progress to pulmonary hypertension through damage to pulmonary capillaries and even severe right heart failure, with the signs and symptoms of this developing over time.
INVESTIGATIONS
Diagnostic criteria require arterial blood gas analysis to demonstrate the required ratio between the partial pressure of arterial oxygen and fractional inspired oxygen concentration.
Radiological Findings
Although there are no pathognomonic radiographic findings for ALI, features on plain chest radiography include:
Bilateral patchy consolidation, which may or may not be symmetrical
Normal vascular pedicle width
Air bronchograms
Pleural effusions may be present
10-15% of patients have pneumothoraces, independent of ventilator settings
Computed tomography of the chest can show the heterogeneous nature of ALI, with dependent areas of the lung showing patchy consolidation with air bronchograms, atelectasis and fibrosis. As with plain radiography, there may be pneumothoraces present.
[Figure: Computed tomography and chest radiograph of ARDS]
MANAGEMENT
The aims of management are to provide good supportive care, maintain oxygenation and to diagnose and treat the underlying cause.
General
Good supportive care, as for all ICU patients, should include nutritional support with an aim for early enteral feeding, good glycaemic control and deep venous thrombosis and stress ulceration prophylaxis. It is important to identify and treat any underlying infections with antibiotics targeted at culture sensitivities and if unavailable, towards common organisms specific to infection site.
It is not uncommon for ALI patients to die from uncontrolled infection rather than primary respiratory failure.
Ventilator-associated pneumonia is common in patients with ALI and can be difficult to diagnose, as the radiological findings of ALI can mask new consolidation, and a raised white cell count and pyrexia may already be present. If suspected, it should be treated with appropriate antibiotics, although long-term ventilation can cause colonisation, which makes endotracheal aspirate culture results difficult to interpret.
Although the role of physiotherapy in ALI is unclear, aims of treatment should be similar to those in all ICU patients, including removal of retained secretions and encouragement of active and passive movements, as patients are often bed bound for prolonged periods of time.
Ventilation
MODE OF VENTILATION
Ventilation is usually via endotracheal intubation using intermittent positive pressure ventilation with PEEP. There may be a role for non-invasive ventilation in the early stages of ALI, but it is poorly tolerated at the higher PEEP settings which may be required to maintain oxygenation, and no evidence supports its use at present. Additionally, there is no evidence to suggest an advantage of either volume- or pressure-controlled ventilation.
Principles of ventilation in ALI are to maintain adequate gas exchange until cell damage resolves, whilst avoiding ventilator-associated injury from:
Barotrauma – alveolar overdistension associated with ventilation at high pressures
Volutrauma – alveolar overdistension associated with ventilation at high volumes
Biotrauma – repeated opening and closing of collapsed alveoli causing shearing stress, which can initiate a proinflammatory process
Lungs in patients with ALI are heterogeneous and can therefore react variably to changes in ventilator settings: settings which provide adequate oxygenation may damage the more “healthy” areas of lung.13
Table 4 Lung ventilation in different parts of the lung with acute lung injury

| Area | Characteristics | Behaviour when ventilated |
|---|---|---|
| 1 | Normal compliance and gas exchange | Easily over-ventilated; exposed to potential damage |
| 2 | Alveolar flooding and atelectasis | Alveoli can still be recruited for gas exchange by safely raising airway pressures |
| 3 | Severe alveolar flooding and inflammation | Alveoli cannot be recruited without using unsafe airway pressures |
TIDAL VOLUMES
Old strategies of high-volume ventilation are likely to over-inflate healthy portions of lung, causing ventilator-induced injury, and ventilator management in ALI has therefore moved towards lower tidal volumes. This follows the ARDSnet tidal volume study, which demonstrated a significant reduction in mortality (40 to 31%) when using a low-volume ventilator strategy based on predicted body weight (6 ml/kg and peak pressures <30 cmH₂O vs. 12 ml/kg and peak pressures <50 cmH₂O).14 Furthermore, the study showed a decrease in systemic inflammatory markers, a lower incidence of multiple organ failure and an increase in ventilator-free days in the lower tidal volume group.
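Note that the low-volume arm doses tidal volume per kilogram of predicted, not actual, body weight. As an illustration, the sketch below uses the commonly cited ARDSNet predicted-body-weight formula; the formula itself is our addition here, not quoted in the text above, so treat it as an assumption to verify against the protocol.

```python
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    # Commonly cited ARDSNet formula (assumed here):
    # PBW = 50 (men) or 45.5 (women) + 0.91 * (height in cm - 152.4).
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def low_tidal_volume_ml(height_cm: float, male: bool,
                        ml_per_kg: float = 6.0) -> float:
    # 6 ml/kg of predicted body weight, per the low-volume arm of the trial.
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

# Example: a 175 cm man has a PBW of ~70.6 kg, giving a target of ~424 ml.
print(round(low_tidal_volume_ml(175, male=True)))
```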
PEEP
It was postulated that PEEP may be beneficial in ARDS as it reduces biotrauma, maintains the patency of injured alveoli, reduces intrapulmonary shunting and improves ventilation-perfusion mismatch. However, evidence regarding its use is inconclusive. Numerous large multicentre trials have demonstrated no difference in outcome or mortality between patients ventilated with lower vs. higher PEEP (8 vs. 14 cmH₂O).15,16,17 Yet a recent JAMA systematic review and meta-analysis showed that, although higher PEEP ventilation was not associated with improved hospital survival overall, it was associated with improved survival in the ARDS subgroup of ALI, and suggested that an optimal level of PEEP remains unestablished but may be beneficial.18
ECMO (Extracorporeal membrane oxygenation)
This is a modified longer term form of cardiopulmonary bypass which aims to provide gas exchange across an artificial membrane external to the body, allowing the lungs time to recover. It is confined to a few specialist centres in the UK and the first results from the CESAR multicentre randomised controlled trial were published in the Lancet in 2009. It showed improved survival in adult patients with severe but potentially reversible respiratory failure on ECMO, as compared to conventional ventilation and demonstrated cost effectiveness in settings like the UK healthcare system.19 This therefore may be a treatment strategy to consider in extreme cases resistant to conventional therapy.
OTHER STRATEGIES
A recent meta-analysis of prone positioning concluded that randomised controlled trials have failed to demonstrate improved outcomes in ARDS patients overall. There is a decrease in absolute mortality in severely hypoxaemic patients with ARDS, but as long-term proning can expose ALI patients to unnecessary complications, it should be used only as rescue therapy for individuals resistant to conventional treatment.20
No evidence supporting specific weaning programmes exists, and a recent Cochrane review showed no evidence to support recruitment manoeuvres in ALI.21
Therefore the aim of ventilation is low volumes with permissive hypercapnia, providing adequate oxygenation (regarded as a partial pressure of arterial oxygen >8 kPa) whilst trying to avoid lung injury from oxygen toxicity.
Fluid Management
Fluid management has to balance the need for enough fluid to maintain adequate cardiac output and end-organ perfusion against a low enough intravascular pressure to prevent the high capillary hydrostatic pressures which could cause pulmonary oedema and worsen oxygen uptake and carbon dioxide excretion. Evidence supports a negative fluid balance in patients not requiring fluid for shock.
Studies as early as 1990 showed that a reduction in pulmonary wedge pressure was associated with increased survival22 and that extravascular lung water was associated with poor outcomes23 in ARDS patients.
The ARDSnet FACTT study compared two fluid regimens: liberal fluid management (a net gain of approximately 1 litre per day) and conservative fluid management (zero net gain over the first seven days).24 Although there was no significant difference in the primary outcome of 60-day mortality, the conservative management group had improved lung function, a shortened duration of mechanical ventilation and intensive care, and no increased incidence of shock or use of renal replacement therapy. This is supported by a recent retrospective review, which concluded that a negative cumulative fluid balance at day 4 of acute lung injury is associated with significantly lower mortality, independent of other measures of severity of illness.25
Pharmacotherapy
To date, no pharmacological agent has been demonstrated to reduce mortality among patients with ALI.26 However, ALI encompasses a wide range of patients with varying aetiologies and comorbidities. It may be that, on subdividing ALI patients, some therapies prove suitable for specific circumstances, but at present there is little literature to support this.
EXOGENOUS SURFACTANT
Since the 1980s, numerous randomised controlled trials have demonstrated no benefit from synthetic, natural or recombinant surfactant use in adults with ALI.
INHALED NITRIC OXIDE
Despite providing selective vasodilatation and improving ventilation-perfusion mismatch, trials have shown only short-lived improvements in oxygenation and no change in mortality with nitric oxide use. At present it plays no role in standard ALI treatment and should be reserved as rescue therapy for patients who are difficult to oxygenate.27
STEROIDS
Despite the potential for steroids to benefit ALI patients through their anti-inflammatory properties, clinical trials demonstrate no improvement in mortality whether they are given early or late in the course of the disease; given concerns regarding their role in the development of the neuromuscular disorders associated with critical illness, a recent large randomised controlled trial argued against steroid use in ALI.28
INTRAVENOUS SALBUTAMOL
Beta-2 agonists were shown experimentally to be beneficial in ALI through increased fluid clearance from the alveolar space, anti-inflammatory properties and bronchodilation.29 The BALTI trial, published in 2006, investigated the effects of intravenous salbutamol in patients with ARDS. It showed decreased lung water at day 7, lower Murray lung injury scores and lower end-expiratory plateau pressures, but an increased incidence of supraventricular tachycardias; further investigation is therefore needed before it can be recommended as a treatment for ALI.30 The BALTI-2 trial is currently underway in the UK to assess possible benefits and complications further.
Other new and promising treatments which are currently being evaluated in trials are activated protein C and granulocyte-macrophage colony-stimulating factor (GM-CSF).
MORTALITY
Mortality rates of patients with ALI and ARDS are similar, both being around 35-40%.³ Controversy exists as to whether mortality rates in ALI are decreasing31 or have stayed static.32 Nonetheless, death in patients with ALI is rarely from unsupportable hypoxaemic respiratory failure but rather from complications of the underlying predisposing conditions or multiple organ failure.33
There is some evidence of racial and gender differences in mortality (worse in African-Americans and males),34 and that thin patients have increased mortality while obese patients have somewhat lower mortality than normal-weight individuals,35 but the main independent risk factors for increased mortality are shown in Table 5.
Table 5 Independent risk factors for increased mortality in ALI as identified in multicentre epidemiological cohorts
·Old age
·Worse physiological severity of illness
·Shock, on admission to hospital
·Shorter stay in the ICU after ALI onset
·Longer hospital stay before ALI onset
·Increased opacity on chest radiography
·Immunosuppression
OUTCOMES
Long-term problems are related to neuromuscular, neurocognitive and psychological dysfunction rather than to pulmonary dysfunction (Table 6). There is poor understanding of the mechanisms which cause these sequelae, and therefore preventing these outcomes and planning rehabilitation can be difficult.
Table 6 Long Term Outcomes in ARDS survivors and caregivers
Neuromuscular dysfunction
·critical illness polyneuropathy
·critical illness myopathy
·entrapment neuropathy
Neurocognitive dysfunction involving
·memory
·executive function
·attention
·concentration
Psychological dysfunction
·Post traumatic stress disorder
·Depression
·Anxiety
Other
·Pulmonary dysfunction
·Tracheostomy site complications
·Striae
·Frozen joints
Caregiver and financial burden
A recent study of patients who survived ALI showed that they require support during discharge from ICU to other hospital settings, and again once in the community, regarding guidance on home care, secondary prevention and support groups.36
CONCLUSION
The syndrome which encompasses ALI and ARDS is common and under-recognised, with many clinicians encountering it outside the ICU setting. Despite advances in identification and management, morbidity and mortality are still high. Care should focus on supportive treatment and managing the underlying cause, whilst specifically aiming for low-volume ventilation and conservative fluid balance. Ongoing research is still needed to hone the diagnostic criteria, define genetic risk factors and develop new treatment strategies to improve outcomes. The new challenge for clinicians is how to address the long-term outcomes of survivors and their relatives, which will be an increasingly important problem in the future.²⁶
Initial reports of Obesity Hypoventilation Syndrome (OHS) date back as early as 1889,1 but it was not until 1955 that Auchincloss2 and colleagues described a case of obesity and hypersomnolence paired with alveolar hypoventilation. Burwell3 coined the term “Pickwickian syndrome” to describe the constellation of morbid obesity, plethora, oedema and hypersomnolence; hypercapnia, hypoxaemia and polycythemia were described on laboratory testing. Obstructive Sleep Apnea (OSA) had not been described at that time and came to be recognized for the first time in the mid-1970s. With attention shifting to upper airway obstruction, hypercapnia began to receive less emphasis, and confusion emerged in distinguishing OSA from OHS. The term ‘Pickwickian’ began to be used for OSA-related hypersomnolence in the obese patient regardless of the presence of hypercapnia. This confusion was finally settled by the American Academy of Sleep Medicine (AASM) in its published guidelines in 1999.4 The AASM statement identified that awake hypercapnia may be due to predominant upper airway obstruction (OSA) or predominant hypoventilation (sleep hypoventilation syndrome), easily distinguished by nocturnal polysomnography (PSG) and response to treatment. Both disorders are invariably associated with obesity and share a common clinical presentation profile.
Salient features of OHS consist of obesity as defined by a BMI > 30 kg/m2, sleep disordered breathing, and chronic daytime alveolar hypoventilation (PaCO2 ≥ 45 mmHg and PaO2 < 70 mmHg).4 Sleep disordered breathing, as characterized by polysomnography in OHS, reveals OSA (apnea-hypopnea index [AHI] > 5) in up to 90% of patients and sleep hypoventilation (AHI < 5) in up to 10%.5 Daytime hypercapnia and hypoxaemia are the hallmark signs of OHS and distinguish obesity hypoventilation from OSA. Severe obstructive or restrictive lung disease, chest wall deformities, hypoventilation from severe hypothyroidism, and neuromuscular disease need to be excluded before a diagnosis of OHS is established. As obesity becomes more prevalent in western society, this disorder has gained more recognition in recent years; however, patients with this syndrome may still go undetected and untreated. No population-based prevalence studies of OHS exist to date, but prevalence can at present be estimated from the relatively well-known prevalence of OHS among patients with OSA. A recent meta-analysis with the largest pooled cohort of patients (n=4250) reported a 19% prevalence of OHS among the OSA population, suggesting an overall prevalence of about 3 per 1000.6
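These criteria lend themselves to a simple rule-based summary. The following sketch is illustrative only: the function and its argument names are hypothetical, and it assumes exactly the cut-offs quoted above (BMI > 30 kg/m2, PaCO2 ≥ 45 mmHg, PaO2 < 70 mmHg, and an AHI of 5 separating the OSA-predominant from the hypoventilation-predominant phenotype). Excluding other causes of hypoventilation remains a clinical judgement that no such rule can replace.

```python
# Illustrative sketch only: applies the diagnostic thresholds quoted in the
# text to classify the sleep-disordered-breathing phenotype in suspected OHS.
# The function and its argument names are hypothetical, and excluding other
# causes of hypoventilation (severe lung disease, chest wall deformity,
# hypothyroidism, neuromuscular disease) remains a clinical judgement.

def classify_ohs(bmi: float, paco2_mmhg: float, pao2_mmhg: float,
                 ahi: float, other_causes_excluded: bool) -> str:
    """Return a provisional label based on the stated OHS criteria."""
    obese = bmi > 30.0                                  # BMI > 30 kg/m2
    hypoventilating = paco2_mmhg >= 45.0 and pao2_mmhg < 70.0
    if not (obese and hypoventilating and other_causes_excluded):
        return "criteria for OHS not met"
    # Polysomnographic phenotype: AHI > 5 suggests the OSA-predominant form
    # (up to 90% of cases); AHI < 5 suggests sleep hypoventilation (up to 10%).
    if ahi > 5.0:
        return "OHS, OSA-predominant phenotype"
    return "OHS, sleep hypoventilation-predominant phenotype"

# Hypothetical example: BMI 42, PaCO2 52 mmHg, PaO2 61 mmHg, AHI 38
print(classify_ohs(42.0, 52.0, 61.0, 38.0, other_causes_excluded=True))
```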
Whilst transient, rectifiable nocturnal hypercapnia is common in patients with OSA, awake hypercapnia in OHS appears to be the final expression of multiple factors. There has been debate about whether BMI and AHI are the most important independent predictors of hypercapnia in obese patients with OSA. More definitive evidence for the role of OSA, however, comes from the resolution of hypercapnia in the majority of patients with hypercapnic OSA or OHS after treatment with either positive airway pressure (PAP) therapy or tracheostomy, without any significant change in body weight or respiratory system mechanics. Yet some recent studies have shown that nocturnal hypoxaemia and diurnal hypercapnia persist in about 50% of such individuals even after complete resolution of OSA with CPAP or tracheostomy. This raises the question of how good AHI really is as a measure of the severity of OSA.
It is intuitive to argue that obesity may exert its effect through mass loading of CO2, via increased production from a higher basal metabolic rate, or through reduced functional residual capacity on lung function. But why do only some severely obese patients with OSA go on to develop OHS? Is the pathophysiology driven by the severity of BMI? Whilst weight loss, particularly surgically induced, clearly leads to resolution of both OSA and hypercapnia,7 the role of BMI as an independent factor for hypercapnia has been challenged by the fact that only a small fraction of severely obese patients in fact develop chronic diurnal hypercapnia. More importantly, not only can PaCO2 be normalized in a majority of patients with PAP therapy and without weight loss, but awake hypercapnia can develop even at lower BMIs among Asian populations. Some investigators have tried to explain the incremental role of BMI as follows. In situations where AHI is not a presumed independent predictor of nocturnal hypercapnia, potential pathophysiological contributors include the pre-event (apnea or hypopnea) ventilatory amplitude relative to the post-event amplitude.8 Such inciting events for nocturnal hypercapnia may then be perpetuated in the daytime by factors such as AHI, forced vital capacity (FVC), FVC/FEV1, or BMI, as shown in the largest pooled data to date.6 It has been shown that, for a given apnea/interapnea duration ratio, a greater degree of obesity is associated with higher values of PaCO2.9 However, the same group of investigators, in another study, did not find any of these factors to be related to the post-event ventilatory response.8
Looking further at the breath-by-breath cycle, the blunted post-event ventilatory response in chronic hypercapnia may relate to eventual adaptation of chemoreceptors, perhaps as a consequence of elevated serum bicarbonate, which is known to blunt the ventilatory drive.10 Or it may relate to whole-body CO2 storage capacity, which is known to exceed the capacity for storing O2.11 Even as our understanding of hypercapnia among obese patients evolves, these questions continue to dominate. Some of the more pressing ones include: are the predictors of daytime hypercapnia different from those of nocturnal hypercapnia in obese patients with OSA? Answering this would advance the more important understanding of the morbidity and mortality associated with OHS and of its correct management. In addition, what is the true effect of untreated OHS on mortality, independent of the co-morbidities related to obesity and OSA? Can morbidities like cor pulmonale and pulmonary hypertension be reversed with treatment of OHS? And how do we treat patients with OHS who fail CPAP/BiPAP, short of tracheostomy?
David Kingdon is Professor of Mental Health Care Delivery at the University of Southampton and Honorary Consultant Psychiatrist to Hampshire Partnership Foundation Trust.
How long have you been working in your speciality?
30 years
Which aspect of your work do you find most satisfying?
Clinical work can be very stimulating but so can research particularly when you feel, rightly or wrongly, that you have contributed something original which can benefit patients.
What achievements are you most proud of in your medical career?
Developing cognitive behaviour therapy for people with psychosis and then seeing it gradually becoming part of accepted practice in many parts of the world.
Which part of your job do you enjoy the least?
Doing reports and filling in forms.
What are your views about the current status of medical training in your country and what do you think needs to change?
Generally I think there have been many positive developments, especially in improving the interaction between patients, health care staff and doctors, but there is still a real problem with conveying the importance of psychological aspects.
How would you encourage more medical students into entering your speciality?
I would like to see psychology being increasingly accepted as a relevant qualification on a par with other sciences.
What qualities do you think a good trainee should possess?
Intelligence and warmth.
What is the most important advice you could offer to a new trainee?
Spend as much time learning from patients and their carers as you can.
What qualities do you think a good trainer should possess?
Intelligence and warmth.
Do you think doctors are over-regulated compared with other professions?
No, although revalidation may be going that way.
Is there any aspect of current health policies in your country that is de-professionalising doctors? If yes, what should be done to counter this trend?
No – we need to maximise the efficiency of our work and this will mean gradual change in roles of ourselves and others.
Which scientific paper/publication has influenced you the most?
‘Not made of wood’ by Jan Foudraine, a Dutch psychiatrist who spent time listening to patients in long-stay hospitals and drawing out the extraordinary stories of their lives.
What single area of medical research in your speciality should be given priority?
Psychological treatments for currently treatment resistant conditions.
What is the most challenging area in your speciality that needs further development?
Classification of mental disorders.
Which changes would substantially improve the quality of healthcare in your country?
Introduction of effective care pathways which are linked directly to outcome measurement and funding contingent on these.
Do you think doctors can make a valuable contribution to healthcare management? If so how?
Yes – by seeing that clinically effective interventions are made available to those who can benefit from them.
How has the political environment affected your work?
Funding has improved over the past decade but is now looking much more uncertain.
What are your interests outside of work?
Family, sailing & watching Southampton FC.
If you were not a doctor, what would you do?
Law, probably, as it also involves work with people and is a steady job.
Immediate postoperative care of patients undergoing nasal surgery, e.g. septoplasty or rhinoplasty, can be hazardous, as desaturation occurs frequently, especially if a patient who has not fully recovered struggles for nasal breathing while the nose is packed with gauze.1,2 Moreover, ice may be applied to the nose in the operating room to decrease swelling, and an external splint may be taped by the surgeon onto the patient’s face.3 All of these make it difficult to apply and fit a Hudson recovery face mask in the post-anaesthesia care unit (PACU) to maintain adequate oxygenation.
Figure 1
Facing this problem, we prepared an oral oxygenating airway device to maintain an open, unblocked airway, in addition to adequate oxygenation, in the early recovery period for patients undergoing nasal surgery. Our device (Figures 1 and 2) is a size 4 or 5 oral airway with a siliconised soft endotracheal tube (ETT) of 5.5 mm fixed alongside the airway, its bevel directed laterally to allow easy insertion of the airway. The distal end of the ETT is cut 4-5 cm from the airway so that it can be connected to a breathing circuit through a 15 mm connector, or connected directly to the tubing of an oxygen flow-meter supplying humidified oxygen at a low flow rate of 1-2 L/minute to provide an FIO2 of 35-40%. This device was used successfully in 54 patients scheduled for septoplasty or rhinoplasty.
Figure 2
In conclusion, this device is simple, cheap and easily inserted. It efficiently maintains adequate arterial oxygen saturation for as long as the oral airway is tolerated in the early recovery period, and it reduces the required oxygen flow rate. In addition, an oxygen analyser can be connected to the 15 mm connector to monitor the delivered FIO2.
Forty-one years after the last influenza pandemic, while everyone was worrying about the avian influenza A (H5N1) virus causing a pandemic, a new chapter opened with the emergence of a new strain of influenza A virus. On 24th April 2009, the World Health Organization (WHO) declared the first ever public health emergency of international concern, indicating the occurrence of confirmed human cases of swine influenza in Mexico and the United States.1 Subsequently the Centers for Disease Control and Prevention (CDC) confirmed that these human influenza cases were caused by a novel strain of influenza A virus to which there is little or no population immunity.2 In June 2009, the WHO raised the pandemic alert from phase 5 to phase 6, signalling that the first pandemic of the 21st century was underway. It was, however, stressed that the rise in the pandemic alert level was mainly attributable to the global spread of the virus rather than to its severity. The pandemic potential of influenza A viruses has been ascribed to their genetic and antigenic instability and their ability to transform by constant genetic re-assortment or mutation, which can result in the emergence of novel progeny subtypes capable of infecting humans and sustaining person-to-person transmission.3 The newly emerged strain contains a combination of gene segments that had not previously been identified in swine or human influenza viruses.4
Historical Perspectives
Influenza has been recognised for hundreds of years, but its cause was unknown for most of this time. Hippocrates described this disease about 2400 years ago, although without laboratory confirmation.5 The year 1580 marks the first instance of influenza recorded as an epidemic, even though many influenza epidemics probably occurred before then.6 The word influenza (meaning ‘influence’), first used in 1743, originates from the Italian, the disease being named so because it was considered to be caused by unfavourable astrological influences. Since 1700, there have been approximately a dozen influenza A virus pandemics, and the lethal outbreak of 1918-1919 has been dubbed the greatest medical holocaust in recorded history, killing up to 50 million people worldwide.7
The earliest evidence of influenza A virus causing acute respiratory illness in pigs dates to the 1930s. Swine influenza A viruses are antigenically very similar to the 1918 human influenza A virus, and they may all have originated from a common ancestor.8 From 1930 to 1990, classical swine influenza A was the commonest swine influenza virus circulating in the swine population, during which time the virus underwent little genetic change. Antigenic variants of these classical influenza viruses emerged in 1991, and the real antigenic shift occurred at the end of the last century, when the classical swine influenza virus re-assorted with a human influenza A virus and a North American lineage avian influenza virus. This resulted in the emergence of multiple subtypes, including H1N2 and H3N2. In the past few years, sporadic cases of human infection caused by swine influenza A viruses have occurred, mainly due to these subtypes. Occupational exposure to swine was the most important risk factor for infection, and fortunately all patients recovered without efficient, sustained human-to-human transmission resulting.9
Origin of 2009 Strain
The pandemic that began in March 2009 was originally referred to as “swine flu” because laboratory testing showed that many of the genes in the new virus were very similar to influenza viruses that normally occur in pigs (swine) in North America. Further study has shown that this new strain represents a quadruple re-assortment of two swine strains, one human strain and one avian strain of influenza. The largest proportion of genes comes from swine influenza viruses (30.6% from North American swine influenza strains and 17.5% from Eurasian swine influenza strains), followed by North American avian influenza strains (34.4%) and human influenza strains (17.5%).10 Analysis of the antigenic and genetic characteristics of the pandemic influenza A virus demonstrated that its gene segments had been circulating for many years, suggesting that a lack of surveillance in swine is the reason this strain had not been recognised previously.11 This novel strain is antigenically distinct from seasonal influenza A and possesses previously unrecognised molecular determinants that could be responsible for its rapid human-to-human transmission. Moreover, antigenic drift has occurred among different lineages of viruses; therefore, cross-protective antibodies against avian, swine and human viruses are not expected to exist. Emerging scientific data support the hypothesis of a natural genesis, with domestic pigs playing a central role in the generation and maintenance of the virus. Protein homology analysis of more than 400 protein sequences from the new influenza virus, as well as other homologous proteins from influenza viruses of the past few seasons, also confirmed that this virus has a swine lineage.1 Phylogenetic analysis has suggested that initial transmission to humans occurred several months before recognition of the outbreak, and the multiple genetic ancestry of this influenza A virus is not indicative of an artificial origin.11
Situation Update
In March 2009, an outbreak of respiratory illness was first noted in Mexico, which was eventually identified as being related to influenza A.12 The outbreak spread rapidly to the United States, Canada and throughout the world as a result of airline travel.13 On 11th June 2009, the WHO raised its pandemic alert to the highest level, i.e. phase 6, indicating widespread community transmission on at least two continents.14
Pandemic influenza was the predominant influenza virus circulating in the US, Europe, northern and eastern Africa, and Australia. Virus activity initially peaked and then declined in North America and in parts of western, northern and eastern Europe, but continued to increase in parts of central and south-eastern Europe, as well as in central and south Asia. As of 28th February 2010, more than 213 countries and overseas territories or communities worldwide had reported laboratory-confirmed cases of pandemic influenza 2009, including at least 16,455 deaths; a figure the WHO acknowledges significantly under-reports the actual number.15 Most of the deaths have been related to respiratory failure resulting from severe pneumonia and acute respiratory distress syndrome.16
In India, the number of confirmed cases up to March 2010 was 29,953, with a total of 1,410 deaths reported. The rate of infection has been highest among children and young individuals aged under 24 years. To date, pandemic influenza A infections have been uncommon in persons older than 65 years, possibly as a result of pre-existing immunity against antigenically similar influenza viruses that circulated before 1957.17 High rates of morbidity and mortality have been noted among children and young adults with underlying health problems, including chronic lung disease, immunosuppressive conditions, cardiac disease, pregnancy, diabetes mellitus and obesity.18
Transmission and Shedding
The novel virus is contagious and can be transmitted from human to human in ways similar to other influenza viruses. The main route of transmission between humans is via inhalation of infected respiratory droplets produced by coughing and sneezing (the virus particles themselves measure approximately 0.08-0.12 µm).19 Transmission via contact with surfaces that have been contaminated with respiratory droplets, or by aerosolised small-particle droplets, may also occur. In addition to respiratory secretions, all other body fluids (including diarrhoeal stool) should be considered potentially infectious.
The exact incubation period is unknown but is estimated to range from 1 to 7 days; the median in most cases appears to be approximately 2 days.20 Shedding of the virus begins the day before the onset of symptoms and can persist for 5-7 days in immunocompetent individuals; the amount of virus shed is greatest during the first 2-3 days of illness. Persons who remain ill for longer than 7 days after illness onset should be considered potentially contagious until symptoms have resolved. Longer periods of shedding may occur in children (especially young infants), elderly adults, patients with chronic illnesses, and immunocompromised hosts.
Clinical Manifestations
According to the CDC, the symptoms of the 2009 “flu” virus in humans are similar to those of influenza and of influenza-like illness in general. Illness with the virus has ranged from mild to severe, and symptoms include fever, cough, sore throat, body aches, headache, chills and fatigue, which are the usual features of influenza. The 2009 outbreak showed an increased percentage of patients reporting diarrhoea and vomiting.16 As these symptoms are not specific to swine flu, a differential diagnosis of probable swine flu requires not only compatible symptoms but also a high likelihood of swine flu based on the person’s recent history. The CDC advised physicians to consider swine influenza infection in the differential diagnosis of patients with acute febrile respiratory illness who had either been in contact with persons with confirmed swine flu, or who had been in states that reported swine flu cases during the 7 days preceding their illness onset.
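This advice amounts to a simple two-part screening rule. The sketch below is illustrative only (the function and its argument names are hypothetical); it encodes the quoted criteria of compatible symptoms plus a suggestive 7-day exposure history, and indicates when to consider the diagnosis, not how to confirm it.

```python
# Illustrative sketch only: encodes the CDC advice quoted above on when to
# consider swine influenza in the differential diagnosis. Function and
# argument names are hypothetical; this is a triage prompt, not a test.

def consider_swine_flu(acute_febrile_respiratory_illness: bool,
                       contact_with_confirmed_case: bool,
                       in_reporting_state_last_7_days: bool) -> bool:
    """Return True if swine influenza should enter the differential."""
    if not acute_febrile_respiratory_illness:
        return False
    # Either exposure route within the 7 days preceding illness onset counts.
    return contact_with_confirmed_case or in_reporting_state_last_7_days

# Hypothetical example: febrile respiratory illness after travel to a state
# reporting cases, but no known contact with a confirmed case.
print(consider_swine_flu(True, False, True))  # True
```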
The overall severity of the 2009 virus has been less than that observed during the influenza pandemic of 1918-1919. Most patients appear to have had uncomplicated, typical influenza-like illness and recovered without requiring any medical treatment. About 70% of people who have been hospitalised have had one or more underlying medical conditions, including pregnancy, diabetes, heart disease, asthma and kidney disease.21 The most common cause of death is acute respiratory distress syndrome. Other causes of death are severe pneumonia with multifocal infiltrates (leading to sepsis), high fever (leading to neurological problems), dehydration (from excessive vomiting and diarrhoea) and electrolyte imbalance. Fatalities are more likely in young children (<5 years), the elderly (>65 years) and in people with underlying conditions, including pregnancy, asthma, other lung diseases, diabetes, morbid obesity, autoimmune disorders, immunosuppressive therapies, neurological disorders and cardiovascular disease.22
Laboratory Diagnosis
All diagnostic laboratory work on clinical samples from suspected cases of virus infection should be done in a Biosafety Level 2 (BSL-2) laboratory. Suspected cases of novel influenza infection should have respiratory specimens (nasopharyngeal, nasal or oropharyngeal swab, bronchoalveolar lavage or endotracheal aspirate) collected for testing for the 2009 flu virus. Specimens should be placed in sterile viral transport medium (VTM) and kept at 4°C. Real-time reverse transcriptase polymerase chain reaction (RT-PCR) is the recommended, sensitive method for detection of the virus, and for differentiating pandemic 2009 influenza from regular seasonal flu.23 Rapid influenza diagnostic tests (RIDTs), although they provide results within 30 minutes or less, cannot distinguish between influenza A virus subtypes; moreover, RIDTs do not provide any information about antiviral drug susceptibility. Isolation of the virus in cell culture or embryonated eggs is another method of diagnosis, but it may not yield timely results for clinical management, and a negative viral culture does not exclude influenza A infection.
However, most people with flu symptoms do not need a test for pandemic 2009 flu, because the test results usually do not affect the recommended course of treatment. The CDC recommends testing only for people who are hospitalised with suspected flu, persons with underlying medical conditions, and those with weakened immune systems.24 The CDC also stresses that treatment should not be delayed while awaiting laboratory confirmation; rather, the diagnosis should be made on clinical and epidemiological grounds and treatment started early.
Treatment
The virus isolates in the 2009 outbreak were found to be resistant to amantadine and rimantadine. The CDC recommends the use of neuraminidase inhibitors as the drugs of choice for treatment and prevention of 2009 influenza in both children and adults.25 Tamiflu (oseltamivir phosphate) and Relenza (zanamivir) are the two FDA-approved influenza antiviral drugs, and a third neuraminidase inhibitor, peramivir, is an experimental drug approved for hospitalised patients in cases where the other available treatments are ineffective or unavailable. Antiviral drugs not only make the illness milder but also prevent serious flu complications. However, the majority of people infected with the virus make a full recovery without requiring medical attention or antiviral drugs. Treatment is recommended for patients with confirmed or suspected 2009 influenza who have severe, complicated or progressive illness or who are hospitalised. People who are not in an at-risk group but have persistent or rapidly worsening symptoms should also be treated with antivirals. Therapy should be started as soon as possible, since the evidence of benefit is strongest when treatment is started within 48 hours of illness onset.26 Treatment should not be delayed while awaiting the results of diagnostic testing, nor should it be withheld in patients with indications for therapy who present more than 48 hours after the onset of symptoms. Besides antivirals, supportive care at home or in hospital focuses on controlling fever, relieving pain and maintaining fluid balance, as well as identifying and treating any secondary infections or other medical problems.
Major Concern
The neuraminidase inhibitors oseltamivir and zanamivir provide valuable defences and have been used widely for treatment and chemoprophylaxis of 2009 pandemic influenza A. But the recent emergence of resistance to these antiviral drugs is a matter of immediate concern. Influenza A strains resistant to oseltamivir have been reported from a variety of geographical locales and pose a challenge for the management of severely compromised patients.27 The CDC has warned that indiscriminate use of antiviral medications to prevent and treat influenza could ease the way for drug-resistant strains to emerge, which would make the fight against the pandemic much harder. Most patients recover spontaneously without any medical attention, and use of antiviral medications should be reserved primarily for people hospitalised with pandemic flu and for persons with pre-existing or underlying medical conditions who are at higher risk of influenza-related complications. It has also been emphasised that early treatment once a patient has developed symptoms, rather than chemoprophylaxis, should reduce opportunities for the development of oseltamivir resistance.26 The degree to which these drugs will remain effective for treatment of the novel strain of influenza in the coming months remains an open question.
What’s next?
The only practicable way to combat the situation is large-scale immunization. Antiviral drugs are not a substitute for vaccination and are used only as an adjunct to vaccines in the control of influenza. Vaccines are one of the most effective ways to protect people from contracting illness during epidemics and pandemics of influenza. The seasonal vaccines do not confer any protection against 2009 H1N1; new vaccines have been licensed and are available.28 The vaccines are available in both live-attenuated and inactivated formulations. Two types of vaccine are approved by the FDA for the prevention of 2009 pandemic influenza: TIV (the “flu shot” of trivalent inactivated vaccine) and LAIV (a nasal spray of live attenuated vaccine). The inactivated vaccine is contraindicated in patients with severe allergic reactions to eggs or to any other component of the vaccine. The live attenuated vaccine is licensed for persons aged 2 through 49 years who are not pregnant, are not immunocompromised and have no underlying medical conditions. Children under 5 years of age who have asthma, and children taking long-term aspirin therapy, should also not receive the live vaccine. Otherwise, both vaccines are safe and highly immunogenic: a single dose leads to a robust immune response within about 10 days in 80% to 90% of adults aged 18-64 years and in 56% to 80% of adults aged 65 years and older.29 Children younger than 10 years require two doses of the vaccine separated by at least 21 days. Adverse effects following vaccination are minor, much like those of the seasonal influenza vaccine, and are self-limiting. Concerns regarding the risk of Guillain-Barré syndrome (GBS) after vaccination have been raised, but various studies have suggested that the risk of GBS is higher from influenza itself than from the vaccine.30 The CDC is now encouraging everyone, including people aged 65 years and above, to get vaccinated against the 2009 strain of influenza.
The Government of India has recently approved a split-virus, inactivated, non-adjuvanted monovalent vaccine (Panenza, by Sanofi Pasteur) to inoculate frontline health workers and those at high risk of infection.31 Groups of health care workers have also been singled out for attention and immunization by the European Council.32 Infection control practices in health care settings should be followed as per the guidelines.33 Patients should also be educated about other preventive measures, including using tissues to cover the mouth and nose when coughing and sneezing, good hand-washing technique, use of alcohol-based hand rubs, avoiding contact with ill persons where possible, and staying at home when ill except to seek medical attention.
The flu season seems to be dying down in 2010, but the war is not yet over. Lessons must be learnt from previous influenza pandemics, and it remains important to get vaccinated against the flu and to be prepared, as activity, as well as virulence, might increase again in the coming season. The words of Margaret Chan (Director-General, WHO) should be remembered: “the virus writes the rules and this one, like all influenza viruses, can change the rules, without rhyme or reason, at any time”.
‘Everyone thinks of changing the world, but no one thinks of changing himself.’ Leo Tolstoy (1828-1910)
Readers surely must have noticed by now how ‘client’, ‘service user’, ‘customer’, and other business terms have gained momentum in health care settings over the years. Newspeak has insidiously worked its way into all health policy documents. For reasons that escape me, in mental health services particularly, there seems to be an unwritten diktat that hospital personnel use any terminology other than ‘patient’ for those attending for treatment. Anyone who sets foot inside a hospital is now deemed to be a service user even though the word patient (from the Latin, patiens, for ‘one who suffers’) has not changed its meaning for centuries. Yet curiously, management Newspeak is not questioned or discussed openly by medical or nursing staff, perhaps for fear of being labelled old-fashioned, trying to cling on to relics of a bygone era. Subtle, unspoken, ‘nannying’ of health professionals in general, and a casual, perfunctory dismissal of matters medical now seem to be the order of the day.
The term ‘patient’ is now viewed sceptically by some in the management hierarchy as depicting an individual dependent on the nurse or doctor, rather than as a token of respect for that person’s privacy and dignity. Non-clinical therapists are not obliged to use the term patient. What follows from that, however, is the abstruse rationale that it is probably best to describe everyone as a ‘client’, ‘customer’, or ‘service user’ so as not to appear judgemental or create confusion. This apparently avoids ‘inferiority’ labelling and ensures all are ‘treated’ the same. Using the term ‘patient’ implies a rejection by doctors of multi-disciplinary team working, we are led to believe. There is a perceived, albeit unfounded, notion that the medical profession wants to dominate those with mental health problems in particular by insisting on a biological model of illness and, by inference, pharmacological ‘chemical cosh’ treatments. At the heart of all this mumbo-jumbo lies the social model of care with its aim of ‘demedicalising’ the management of mental illness. This, ironically, seems at odds with medical practice, where the emphasis has always been on a holistic approach to patient care. Yet an insistence on a social model of mental illness is as patronising to the patients that hospital managers purport to be caring for as is the imagined ‘disempowerment’ model they want to dismantle. Some in the health management hierarchy contend that the word ‘patient’ fits poorly with today’s views of ‘users’ taking an ‘active part’ in their own health care.1 Or does it? One may decide to have the cholecystectomy or the coronary bypass, when the acute cholecystitis and chest pain respectively have settled down, and select the time and date of the procedure, but I doubt whether one has any real ‘choice’ in the matter when the condition becomes critical, or that one will play an active part in the procedure itself.
The concept of empowerment, which has been around for decades, also seems to be enjoying a renaissance, being one of the current buzzwords in ‘modern’ health care. Other buzz phrases, among many, include ‘freedom of choice’, ‘equity’, ‘right to participation’, ‘increased role of the consumer.’ Empowerment, theoretically, enables new customers to stand up for themselves, demand their therapeutic rights and choose their own treatment. Fine when you are well. However, should I develop a serious illness, particularly one in which I have no great expertise, and because I cannot conceivably amass the entire body of medical knowledge before I see the doctor or nurse about my condition, I would prefer the physician/nurse to outline the treatment plan. I do not want to be called a client, customer or punter, because such derisory terms are more apt to make me feel, ironically, ‘disempowered’.
Why the change?
‘If you want to make enemies, try to change something.’ Woodrow Wilson (1856-1924)
What is it about doctors using the word ‘patient’ that health managers and non-medical therapists find so irritating and difficult to accept? Perhaps the answer lies in the doctor-patient relationship, akin to the attorney-client privilege afforded to the legal profession, so loathed by the judicial system. We are being swept along on a current of neutral, incongruous words such as ‘client’ (the most popular at present), ‘service user’ (this applies across the board), ‘consumer’ (Consuming what? I know my rights!), ‘customer’ (Do I get a warranty with this service? May I return the goods if they are unsatisfactory?) Better still, ‘ambulatory health seekers’ (the walking wounded) and ‘punters’ (a day at the races). The general trend it seems is for doctors to name one attending an appointment as ‘patient,’ midwives opt for ‘people’, social workers tend to speak of the ‘service user’, psychologists and occupational therapists prefer ‘client’, and psychoanalysts sometimes use the rather cumbersome description ‘analysand’. What is usually forgotten is that the person waiting in the analyst’s reception is no different from the humble stomach-ache sufferer.2
To most people ‘service user’ suggests someone who uses a train or bus, or brings their car to a garage or petrol station. The term ‘user’ often denotes one who exploits another; it is also synonymous with ‘junkie’ and a myriad of other derogatory terms for those dependent on illegal drugs; ‘client’ has ambiguous overtones, and ‘people’ refers generally to the population or race, not to individuals receiving treatment. For general purposes a ‘client’ could be defined as a person who seeks the services of a solicitor, architect, hairdresser or harlot. There is also talk of ‘health clients’. Someone who goes to the gym perhaps! A customer is a person who purchases goods or services from another; it does not specifically imply an individual patient buying treatment from a clinician. Try to imagine the scenario of being told in your outpatient setting that a client with obsessive compulsive disorder, or a service user who is psychotic, or a customer with schizophrenia, is waiting to be seen. Although it defies belief, this is how non-medical therapists portray patients. Would a medical doctor describe a person with haemorrhagic pancreatitis as a customer? Picture a physician and psychiatrist talking about the same person as a patient and customer respectively. Patients make appointments with their general practitioners. In psychiatry the terms are an incongruous depiction of the actual clinic setting in that most patients are not consumers or customers in the market sense; indeed many have little wish to buy mental health services; some go to extraordinary lengths to avoid them.3 Those who are regarded as in greatest need vehemently avoid and reject mental health services and have to be coerced into becoming ‘customers’ through the process of the Mental Health Act.
What do our medical and surgical colleagues make of all this? Despite Newspeak insidiously weaving its way through other specialties, it does not seem to have permeated medicine or surgery to the same extent. Is psychiatry therefore alienating itself even further from other fields in medicine by aligning itself with this fluent psychobabble? Do cardiologists refer to patients with myocardial infarctions as customers? Does a patient with a pulmonary embolism or sarcoidosis feel more empowered when described as a punter? Changing the name does not address the illness or the factors in its causation. Perhaps one could be forgiven for using terms other than ‘patient’ for someone who wants plastic surgery to enhance their facial appearance, or a ‘tummy tuck’ to rid themselves of fatty tissue induced by overindulgence, or in more deserving cases, successive pregnancies. Readers will have no difficulty adding to the list. Such people are not ill. However, when describing a person with multiple myeloma, acute pulmonary oedema, disseminated intravascular coagulopathy or diabetic ketotic coma, I’m not so sure ‘consumer’ or ‘ambulatory health client’ fits the profile. After all, a customer usually wants to ‘buy something’ of his/her own choosing. Now this may apply to ‘gastric banding’ or silicone implants, but there is not much choice on offer when one is in a hypoglycaemic coma or bedridden with multiple sclerosis.
Despite the above, when people were actually asked how they would prefer to be described by a psychiatrist or by a general practitioner, 67% and 75% preferred ‘patient’ respectively.4 Another survey revealed a slightly higher preference (77%) for ‘patient’.5 One might argue that such results depend on the setting where the surveys were carried out and by whom. However, logic dictates that if I am in the supermarket waiting to be served, I would assume I am a customer; while attending the general practitioner’s surgery for some ailment, I would imagine I am there as a patient. Such surveys are conveniently ignored by service providers. So what does it matter? It matters because the lack of direct contact between managers and patients puts the former at a great disadvantage and leads one to question their competence and credibility when accounting for patient preferences. Perhaps managers should ‘shadow’ physicians and surgeons to fully understand why the people they treat are called patients. Psychiatry is not a good example of normal medical practice since so many of its adherents possess the illusory fantasy of being ‘experts in living’, and not physicians whose aim is to diagnose and treat.
Be patient
‘The art of medicine consists in amusing the patient while nature cures the disease.’ Voltaire (1694-1778)
It is noticeable that ‘patient’ remains the preferred usage by the media, press, and cabinet ministers, and of course, by medical and surgical teams. The implicit meaning of the word ‘patient’ is that someone is being cared for, and the media at least seem to respect this. Ironically, in the field of mental health, clinicians will often write letters to other professionals referring to an ill person as a ‘patient’ in one paragraph, and a ‘client’ in the next! Doubt and equivocation reign. It is as if the stigma of mental illness will evaporate if we gradually stop talking about sufferers as patients, and ‘empower’ them by describing them as ‘customers.’ There is ambiguity in the terminology itself. The term service user is the most disliked term among those who consult mental health professionals.6 The terms are also used interchangeably, with ‘customers’ and ‘service users’ described in the same breath. What do we call a drug-user: a service user drug-user or a drug-user service user, a customer who uses drugs, or a drug-using customer? How does one accurately describe an individual using alcohol and illegal drugs? Is an infant suffering from respiratory distress syndrome or a child moribund with bacterial meningitis an active participant in his/her health care? In theory, they are service users. What about young people among whom substance misuse is prevalent?7 Do we label and stigmatise them as drug clients or drug customers? Will the outpatient and inpatient departments be redesignated as out-service or in-service user clinics? Oxymoronic terms such as ‘health clients’ do not convey any meaning when applied to hospital patients. Doubtless, critics, with their customary predictability, will lamely and with gloating schadenfreude accuse the medical profession of bemoaning its loss of hegemony in health care matters, but their arguments are specious, stem from a lingering resentment of the medical profession, and amount to little.
In other areas of health some argue that making choices about lifestyle, and seeking advice on matters such as fertility, liposuction, gastric banding, or cosmetic surgery, do not require one to be called a patient, and rightly so. Such information is freely available at clinics and on the Internet, and therefore does not require the advice of a doctor per se, until the actual procedure is imminent. However, it would be inconceivable for a patient undergoing, say, a laparoscopic bypass or sleeve gastrectomy for obesity not to heed the views of the surgeon performing the procedure on the operation itself, the success rate, and complications. Whether to have the operation is a different matter. Similarly, individuals who want to engage in psychological therapies such as cognitive or psychoanalytic, or who would rather indulge in an expensive course of ‘emotional healing’, can choose for themselves. Neither does one need to see a nurse practitioner or general practitioner for a mild upper respiratory tract infection. Such people are not suffering from any serious medical illness (an enduring feeling of being physically or mentally unwell) in the true sense of the word.
When all is said and done, most people are unschooled in etymology, and condemning words because of their remote origins is pointless. Words change in meaning over time. Often they take on a new meaning, all too obvious in teenage slang. The word ‘wicked’ used to mean sinful, now it refers to something ‘cool’ (another word that has changed its meaning). Besides, if ‘patient’ really is that offensive, it seems odd that it has retained unchallenged supremacy in the United States, the centre of consumerist medicine, where the patient is quite definitely a partner.8
Physicians do not want to return to the days of paternalistic and condescending medicine, where deferential, passive patients were at the mercy of the stereotypical omniscient, omnipotent doctor or nurse matron. Likewise, patients do not want to be treated like products in order to achieve targets for the government health police. Patients nowadays are generally more confident and better informed about their conditions than in days gone by (in other words, already empowered), particularly with the advent of the Internet (where, alas, misinformation also abounds), and this is welcome. Therefore, if you are relatively well you can choose a treatment to suit your lifestyle. Unfortunately, not many patients suffering from chronic illnesses, for example, schizophrenia in some cases, or a degenerative condition such as motor neurone disease, feel empowered. I might feel empowered when I can decide to have one therapy or another, say, cognitive as opposed to solution-focussed therapy. I somehow doubt whether I would feel equal in status to, or more empowered than, the surgeon who is performing a splenectomy on me for traumatic splenic rupture.
The thrust of all this is that nothing is thought through; everything consists of ‘sound bites’ and ‘catchphrases’, and the sound bites become increasingly absurd the more one scrutinises the terminology. The medical and nursing profession should only be tending to people who are ill or recovering from illness. Of course other staff are directly or indirectly involved in patient care and follow-up. Physiotherapy is a good example. Nonetheless the title patient remains the same. Therefore let us be clear about the definition: those who suffer from an illness are patients; those who are not ill can be called service users, or whatever term takes your fancy.
An 87-year-old man was referred to hospital with a five-day history of lethargy and increased urinary frequency. He denied symptoms of gastrointestinal (GI) bleeding or abdominal pain. His past medical history included diabetes mellitus, chronic kidney disease, peripheral vascular disease and surgical repair of a ruptured aortic aneurysm 6 weeks previously. Systemic examination, including per rectal examination, was normal. Haemoglobin was 83 g/L and C-reactive protein was 148 mg/L (normal <5). Twelve hours after admission he developed pyrexia (37.8°C) accompanied by tachycardia (103 beats per minute) and hypotension (BP 87/43). Soon afterwards, he had a small amount (<50 ml) of fresh haematemesis. He also complained of lower back pain, and clinical examination revealed tenderness in the left iliac fossa. He was cross-matched for blood and started on intravenous fluids. As his Rockall score was six, an urgent oesophago-gastro-duodenoscopy (OGD) was planned. Over the next few hours he complained of increasing central abdominal pain and had several episodes of melaena. In view of the history of recent aortic surgery and the current GI bleed, the possibility of an aorto-enteric fistula (AEF) was considered. An urgent contrast CT scan of the abdomen (Figure 1) was therefore arranged prior to OGD.
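The clinical (pre-endoscopy) Rockall score referred to above is a simple additive rule. The sketch below is illustrative only: it assumes the commonly quoted cut-offs for the age and shock components, and it leaves the comorbidity weighting (0, 2 or 3 points) to clinical judgement; it is not a validated implementation of the score.

```python
# Illustrative sketch of the pre-endoscopy (clinical) Rockall score using
# commonly quoted cut-offs. Comorbidity weighting (0, 2 or 3 points) requires
# clinical judgement; this is an illustration, not a validated scoring tool.

def clinical_rockall(age: int, pulse_bpm: int, systolic_bp: int,
                     comorbidity_points: int) -> int:
    """Sum the three admission components of the Rockall score."""
    age_points = 0 if age < 60 else (1 if age < 80 else 2)
    if systolic_bp < 100:
        shock_points = 2          # hypotension
    elif pulse_bpm >= 100:
        shock_points = 1          # tachycardia with preserved blood pressure
    else:
        shock_points = 0
    return age_points + shock_points + comorbidity_points

# Hypothetical worked example: an 87-year-old (2 points) with BP 87/43
# (2 points) and comorbidity weighted at 2 points scores 2 + 2 + 2 = 6.
print(clinical_rockall(age=87, pulse_bpm=103, systolic_bp=87,
                       comorbidity_points=2))
```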
Figure 1: Contrast CT scan demonstrating the aorta (A) with extravasation of contrast (B) and a large collection (C) around it with trapped air suggestive of infection.
Contrast computed tomography (CT) of the abdomen revealed an inflammatory soft tissue mass anterior to the infra-renal aortic graft, with pockets of gas and leakage of contrast into it. These findings were suggestive of an AEF. The patient was informed of the diagnosis of AEF and of the need for emergency surgical repair, to which he consented. At operation the vascular surgeons found the duodenum adherent to the aortic graft, with evidence of fistulisation and infection, confirming the diagnosis. Although operative repair appeared to be successful, the patient continued to bleed on the table due to disseminated intravascular coagulation and died twenty-four hours after admission.
Discussion
AEF is defined as a communication between the aorta and the GI tract.1 The diagnosis of AEF should be considered in every patient with a GI bleed and a past history of aortic surgery.2 Our patient had undergone emergency repair of a ruptured aortic aneurysm with a prosthetic graft 6 weeks before his current admission.
AEFs are a rare cause of GI haemorrhage and can be primary or secondary. A primary AEF (PAEF) is a communication between the native aorta and the GI tract.1 The incidence of PAEF ranges from 0.04% to 0.07%.3 PAEFs commonly arise from an abdominal aortic aneurysm, of which 85% are atherosclerotic.1
Secondary AEFs (SAEFs) are an uncommon complication of abdominal aortic reconstruction.4 The incidence of SAEF ranges from 0.6% to 4%.5 Two types of SAEF are generally described. Type 1, termed true AEF, develops between the proximal aortic suture line and the bowel wall and usually presents with massive upper GI haemorrhage.4 Type 2, the paraprosthetic-enteric fistula, has no direct communication between the bowel and the graft and accounts for 15% to 20% of SAEFs.4 In this type of fistula, bleeding occurs from the edges of the bowel eroded by the mechanical pulsation of the aortic graft. Sepsis is more frequently associated with this type of AEF (75% of cases).4 The mean interval between surgery and presentation with SAEF is about 32 months,6 but it can vary from 2 days to 23 years.7 AEFs can involve any segment of the GI tract, but 75% involve the third part of the duodenum, and the affected part is generally proximal to the aortic graft.8
The pathogenesis of AEF is not fully understood, but two theories exist. One suggests that repeated mechanical trauma between the pulsating aorta and the duodenum causes fistula formation; the other suggests low-grade infection as the primary event, with abscess formation and subsequent erosion through the bowel wall.9 The latter theory is felt to be the more likely: the majority of grafts show signs of infection at the time of bleeding, and up to 85% of cases have blood cultures positive for enteric organisms.10
The main symptom of AEF is GI bleeding. Secondary AEFs have traditionally been said to present with a symptom triad (as in our patient) of abdominal pain, GI bleeding and sepsis; however, only 30% of patients present in this manner.11 Patients often have a “herald bleed”, defined as a brisk bleed associated with hypotension and haematemesis that stops spontaneously, followed by massive GI haemorrhage in 20% to 100% of patients.8 Sometimes the GI bleeding can be intermittent.
The commonest investigations for the diagnosis of AEF are OGD, conventional contrast CT and angiography.12 OGD is often the initial investigation, as in any upper GI bleed, mainly because of a lack of clinical suspicion of the diagnosis. The endoscopic findings vary from a graft protruding through the bowel wall, to fresh bleeding in the distal duodenum, to an adherent clot or extrinsic compression by a pulsating mass with a suture line protruding into the duodenum.13 Fewer than 40% of patients have signs of active bleeding at OGD.8 Conventional CT with contrast is widely available and is the investigation most commonly performed to diagnose AEF. Perigraft extravasation of contrast is a pathognomonic sign of AEF and may be associated with signs of graft infection, i.e. perigraft fluid and soft tissue thickening along with gas.12 Multi-detector CT and MRI are more sensitive diagnostic imaging tools, with MRI now used mainly in patients with renal failure to avoid the use of contrast.12
PAEFs can be treated with endovascular stent placement in selected cases especially in those who cannot tolerate emergency surgery.12 The treatment of choice in SAEFs is graft resection and establishment of an extra-anastomotic circulation with repair of the duodenal wall although overall survival rates vary from 30% to 70%.13
Conclusion
SAEFs are a catastrophic complication of aortic surgery. AEFs are relatively rare, and a high index of suspicion is needed in the appropriate clinical situation to make the diagnosis. Left untreated, they are universally fatal, and even surgical repair carries a very high mortality.
Case 1
A 55-year-old white male with a history of hypertension, hyperlipidemia, smoking and transient ischaemic attacks was admitted to hospital with worsening dyspnoea on exertion over a period of 6 weeks. He also reported significant weight loss, loss of appetite and fatigue over several weeks. Physical examination revealed tachycardia and moderate respiratory distress with prominent jugular venous distention. Cardiac auscultation revealed a normal S1 and a loud P2; also heard were an early diastolic heart sound (a “tumour plop”) and a mid-diastolic murmur at the apex. An ECG revealed evidence of left ventricular hypertrophy with repolarization abnormalities. A transthoracic echocardiogram (Figures 1 and 2) revealed a large, pedunculated, mobile left atrial mass measuring 3x4 cm impinging on the mitral orifice, with a mean gradient across the mitral valve of 15 mmHg. Left ventricular systolic function was normal.
Figure 1: Parasternal long axis echocardiograph of the left atrial myxoma prolapsing into the mitral valve during diastole.
A diagnosis of probable left atrial myxoma was made. The patient had four episodes of syncope within 24 hours, the first at 03:53 after returning from the bathroom, and subsequently suffered a cardiac arrest at 14:20.
Figure 2: parasternal long axis echocardiograph showing the large left atrial myxoma during systole.
He was intubated and started on vasopressors. An emergency left heart catheterization, performed prior to referral for surgical excision, revealed triple-vessel coronary artery disease. During cardiac catheterization the patient became more hypotensive, requiring an intra-aortic balloon pump. While arrangements were being made for surgical referral, the patient’s clinical condition deteriorated rapidly; he went into pulseless electrical activity at 18:54 and could not be resuscitated. The patient’s death was presumably due to persistent intracardiac obstruction. At autopsy, the left atrial mass was identified as a haemorrhagic left atrial myxoma, 5x4x3.5 cm in size, attached by a stalk to the inter-atrial septum. Multiple organising thrombi were present in the tumour. Histology showed abundant ground substance with stellate myxoma cells and haemosiderin-laden macrophages (Figures 3 and 4). The cause of death was attributed to valvular “ball-valve” obstruction.
Figure 3: Histopathology of left atrial myxoma showing spindle shaped myxoma cells (white arrow) in a myxoid matrix (black arrow) and blood vessels (top arrow) (H & E 40X)
Figure 4: Histopathology of left atrial myxoma showing vascular spaces filled with relatively fresh blood and evidence of old bleeding (hemosiderin) suggesting repeated episodes of hemorrhage within the myxoma (H & E 4X)
Case 2
A 57-year-old African American female presented with recurrent syncopal episodes, dyspnoea on exertion, orthopnoea, leg swelling, abdominal distention, loss of appetite and fatigue for the preceding nine months. Physical examination revealed jugular venous distention, a displaced apical cardiac impulse, a parasternal heave, and a loud S2. Also detected were a pan-systolic murmur at the lower left sternal border, an early diastolic heart sound with a mid-diastolic murmur at the apex, bibasilar crackles, ascites, and oedema up to the thighs.
Significant laboratory values were a total bilirubin of 1.6 mg/dl, and B- Type Natriuretic Peptide of 1323 pg/ml. A chest x-ray revealed an enlarged cardiac silhouette, right lung atelectasis and effusion. An ECG revealed left atrial and right ventricular enlargement.
The patient was admitted with a diagnosis of new-onset congestive heart failure and was treated with intravenous furosemide (Lasix) and fosinopril. A 2-D echocardiogram revealed a large mass in the left atrium, suggestive of myxoma, measuring 4.5 x 7.5 cm, occupying the entire left atrium and protruding through the mitral valve into the left ventricle (Figure 5).
Figure 5: Apical four chamber echocardiograph of the left atrial myxoma prolapsing into the mitral valve during diastole.
This mass was obstructing flow, with a mean transmitral gradient of 17 mmHg, a reduced stroke volume, and severe pulmonary hypertension with an estimated right ventricular systolic pressure of 120 mmHg. A presumptive diagnosis of left atrial myxoma was made and the patient was scheduled for surgical removal the following morning. She was transferred to the intensive care unit for closer monitoring, and fosinopril and furosemide were discontinued. At about 22:30 that night the patient was noted to be hypotensive, with a systolic blood pressure of around 80 mmHg, and was treated with normal saline and concentrated albumin. She then developed acute respiratory distress at 23:00, requiring intubation and ventilator support. Intravenous dobutamine, dopamine and later norepinephrine were added for continued hypotension. The patient went into pulseless electrical activity; resuscitation restored her pulse, but she remained hypotensive. Cardiothoracic surgery decided not to take the patient for emergency surgery because of her unstable haemodynamic condition. The patient’s family was notified of the poor prognosis and the decision was made not to resuscitate her if her condition deteriorated further. The patient ultimately became bradycardic and went into asystole at 05:30. An autopsy was not performed. The cause of death was attributed to the large left atrial myxoma causing valvular obstruction and cardiovascular collapse.
Discussion
These two cases illustrate an uncommon, malignant course of left atrial myxoma, with rapid progression of symptoms that proved fatal. Myxoma is the most common primary tumour of the heart, accounting for 40-50% of primary cardiac tumours (2,3). Nearly 90% of myxomas occur in the left atrium (3). In over 50% of patients, left atrial myxoma causes symptoms of mitral stenosis or obstruction. Systemic embolic phenomena are known to occur in 30-40% of patients (3).
Table 1. Summary of 17 published cases of sudden cardiac death associated with cardiac myxoma in adults (1950-2008)
Author/Reference | Year | No. | Age | Gender | Symptoms | Interval between symptoms and SCD | Size of myxoma (cm) | Autopsy
Vassiliadis (8) | 1997 | 1 | 17 | M | Dizziness | 3 months | 6 | Yes
McAllister (10) | 1978 | 5 | 40 to 60 | NA | NA | NA | 5 to 6 | Yes
Cina (2) | 1996 | 6 | Below 40 | NA | Embolic, syncope | 16.6 months | 5.7 | Yes
Puff (9) | 1986 | 1 | 41 | M | Syncope | Months | 1.5 | Yes
Puff (9) | 1986 | 1 | 19 | F | Syncope | 6 months | 3 | Yes
Maruyama (7) | 1999 | 1 | 20 | M | Dizziness | 1 day | 8 | None; patient survived SCD and the myxoma was resected
Turkman (6) | 2007 | 1 | 73 | M | DOE | Months | 8 | Yes
Ito (13) | 1987 | 1 | 28 | M | Syncope | 7 days | NA | Yes
NA: not available; SCD: sudden cardiac death; DOE: dyspnoea on exertion
Constitutional symptoms, reported in approximately 20% of patients, include myalgia, muscle weakness, arthralgia, fever, fatigue and weight loss. Around 20% of cardiac myxomas are asymptomatic (3). Severe dizziness or syncope is experienced by approximately 20% of patients, due to obstruction of the mitral valve (4). Of all the symptoms associated with cardiac myxomas, syncope is one of the most ominous prognostic indicators.
Although sudden death is known to occur in patients with primary cardiac tumour, it is rare, estimated to account for 0.005-0.01% of all sudden deaths (1). An association between sudden death and cardiac myxoma was reported as early as 1953 by Madonia et al (5). A review of the literature on this subject between 1950 and 2008 revealed 17 cases of sudden death attributed to cardiac myxoma in adults (1,6,7,8,9,10,13) (Table 1).
In all patients with unexpected death, syncope was a predominant presenting symptom, and their ages ranged from 17 to 73. The majority of patients with sudden death were men, even though the tumour is more common in women. The size of the tumour did not influence the clinical presentation; in some reports of sudden cardiac death the tumour was as small as 1.5 cm and had caused no previous symptoms (3). Sudden death in myxoma is attributed either to severe acute disturbance of cardiac haemodynamics from intracardiac obstruction (ball-valve syndrome) or to coronary embolisation from the tumour. The latter is probably responsible for sudden death in patients with very small tumours. In the study of Alverez Sabin et al (11), the initial neurological manifestation was a transient ischaemic attack (TIA), but in none of the patients was a diagnosis of myxoma made on the basis of the initial neurological symptom. Even though cardiac myxomas are a rare cause of TIA and syncope, it is important to consider cardiac myxoma in the differential diagnosis of any patient with a TIA or syncope (11). The patients presented here had a TIA and recurrent syncope, placing them at high risk for sudden death.
The optimal timing of surgical excision of myxoma is unclear, and it is not unusual for patients to die or experience a major complication while awaiting surgery (2,12). Intra-aortic balloon pump (IABP) use has been described in one case of left atrial myxoma with life-threatening cardiogenic shock, with a favourable outcome (14). As illustrated by the cases presented here, it is essential that surgery be performed urgently once a patient is identified as having a myxoma large enough to cause complete intracardiac obstruction.
Female sexual dysfunction (FSD) is a serious morbidity that can occur postnatally. It may lead to a variety of adverse physical, psychological, and social effects on the patient. Moreover, the consequent cycle of fear might compound the initial sexual disorder and make it more difficult to treat. Early diagnosis and management of the problem are therefore essential to avoid later sequelae for reproductive and sexual life. However, early diagnosis may be hindered by many factors. For example, many patients will be preoccupied with the newborn or embarrassed to talk about sexual matters after delivery, which makes it very important for midwifery, medical, or other staff to raise the issue during postnatal care sessions. The staff, on the other hand, might feel uncomfortable discussing sexual function with the client, or may even lack the knowledge and skills required for sexual health counselling. In addition to this client-service gap, there are gaps between different sexual service providers.
There are many types of postnatal sexual disorder, and they can differ widely in clinical features and management. Additionally, management of postpartum female sexual dysfunction (PPFSD) can vary with the clinician’s experience. There are very few randomised clinical trials on treatment for PPFSD, which partly explains the service-to-service gap in PPFSD management.
In the last three decades there has been an increase in the caesarean section rate in the developed world for many maternal and fetal indications, especially with the significant improvement in surgical and postoperative care. Recently, more attention has been paid to the positive role caesarean section may play in protecting the female pelvic floor from birth trauma. Many authors have implicated perineal birth trauma in adversely affecting female sexual well-being.1 On the other hand, there is a growing opinion that the quality of postnatal sexual health is unrelated to the mode of delivery.2 These two contradictory positions in the literature illustrate the scale of the dilemma faced when counselling a woman who requests a caesarean section because she is worried about sexual dysfunction after vaginal delivery. The problem becomes even more difficult if the woman already suffers from a sexual disorder (for example, dyspareunia) in the antenatal or preconception period.
Female sexual dysfunction is the impaired or inadequate ability of a woman to engage in or enjoy satisfactory sexual intercourse and orgasm. There are certain natural events in a woman’s life when she is at increased risk of developing sexual dysfunction, such as the use of contraceptive pills, menstruation, the postpartum and lactation period, perimenopause, and postmenopause. This may be related to fluctuations in gonadal hormone secretion, making women more vulnerable to sexual symptoms.3 Postpartum female sexual dysfunction (PPFSD) is a common health problem, with varying incidence reported in the literature. Xu et al reported an incidence of PPFSD of 70.6% in the first 3 months after delivery, falling to 55.6% during the 4th to 6th months and to 34.2% at the 6th month, but not returning to the pre-pregnancy level of 7.17%.2
For the purpose of this review, the classification of sexual dysfunction put forth by the American Psychiatric Association (APA, 1994) in the Diagnostic and Statistical Manual, 4th Edition (DSM-IV) is used to help understand the differing presentations of PPFSD.4 The main postpartum female sexual dysfunction categories are: sexual desire dysfunction (hypoactive sexual desire disorder), sexual pain disorders (which include dyspareunia, vaginismus, and vulvodynia), sexual arousal disorder, and female orgasmic disorder.
To understand this classification better, it is important to refer to the early research in this field by Masters and Johnson in 1966. One of their most influential findings was the four-stage model of sexual response, which they described as the human sexual response cycle.5 They divided the cycle into four stages: the excitement phase (initial arousal), the plateau phase (full arousal, but not yet orgasm), orgasm, and the resolution phase (after orgasm).5
Although it is normal to have hypoactive sexual desire (loss of libido) in the first 6-7 weeks after giving birth, this becomes abnormal when the desire for sexual activity is persistently reduced or absent, causing distress in the relationship. Sexual desire disorder after delivery may be due to the mother being preoccupied with the neonate or with postpartum complications (e.g. infection, pain, and bleeding). It can often be associated with sexual pain disorder as well.
Dyspareunia is the most common type of PPFSD. Solana-Arellano et al (2008) reported an incidence of 41.3% for dyspareunia in the 60-180 days after giving birth.1 Postpartum dyspareunia may be due to medical (physical) problems such as a poorly healed perineal or vaginal tear, postpartum infection, cystitis, arthritis, or haemorrhoids, which may worsen after delivery. Dyspareunia might also be caused by psychosocial factors such as relationship problems with the partner, work stress, financial crisis, depression, and anxiety. In many cases, dyspareunia results from a combination of medical and psychosocial factors. Although vaginismus is recognised as a distinct entity, it is usually associated with dyspareunia when it occurs in the puerperium. Vaginismus is the involuntary spasm of the pubococcygeal muscles causing difficult and painful penetration. Sexual desire disorders and isolated postpartum sexual arousal and orgasmic disorders are rarely seen in postnatal clinics; when they occur they tend to be part of other PPFSDs.
Methods:
Risk factors for PPFSD: To assess the risk factors for PPFSD, a literature review was performed using the National Health Library database, including all resources (AMED, BNI, CINAHL, EMBASE, HEALTH BUSINESS ELITE, HMIC, MEDLINE, and PsycINFO). The MeSH terms used were (postpartum sexual dysfunction OR postpartum dyspareunia OR dyspareunia after delivery OR sexual dysfunction after delivery OR sexual problems after delivery). Other MeSH terms (using the words sexuality and/or puerperium) were also used to expand the search. Only studies discussing the risk factors of PPFSD after vaginal birth were included. Perineal pain as a complication of episiotomy or tears was differentiated from dyspareunia, and studies on perineal pain after delivery were excluded from the review if they did not discuss the effect of the pain on sexual activity.

Effect of mode of delivery: A search of the Cochrane Library databases found no review related to the subject. However, Hicks et al (2004) conducted a systematic review of the literature focused on mode of delivery and the most commonly reported sexual health outcomes, which included dyspareunia, resumption of intercourse, and self-reported perception of sexual health/sexual problems.6 They suggested an association between assisted vaginal delivery and some degree of sexual dysfunction, but reported that associations between caesarean delivery and sexual dysfunction were inconsistent and that continued research was necessary to identify modifiable risk factors for sexual problems related to the method of delivery.6 Hicks et al searched the PubMed, CINAHL, and Cochrane databases from January 1990 to September 2003,6 so we continued the review by examining the literature after that date. To assess the effect of mode of delivery on PPFSD (caesarean section vs. vaginal birth), a literature review was performed using the National Health Library database, including all resources (AMED, BNI, CINAHL, EMBASE, HEALTH BUSINESS ELITE, HMIC, MEDLINE, and PsycINFO) from October 2003 to January 2010. New MeSH terms were used, related to comparisons between different modes of delivery (caesarean section, vaginal birth, modes of delivery, sexual dysfunction, sexual disorder, dyspareunia). Additional studies were obtained from reference lists. Only studies that directly compared caesarean section and vaginal birth in terms of PPFSD were included.

Results:

Risk factors for PPFSD: Nineteen studies and one systematic review were retrieved for the period 01/01/1984 to 01/01/2010. The Cochrane Library database had no related articles; there was, however, a Cochrane review on short-term postpartum perineal pain not related to sexual activity, which was therefore excluded from this review. The systematic review included in this body of literature is the Langer and Minetti review of the complications of episiotomy.7 Having systematically reviewed 472 articles on the Medline database, they concluded that episiotomy, whether medial or mediolateral, appeared to cause more dyspareunia than spontaneous perineal tears.7 However, there was no significant difference in the incidence of dyspareunia beyond the three-month period after delivery.7 After that review, Solana-Arellano et al (2008) showed that complications of episiotomy are an important risk factor for postpartum dyspareunia.1
They found that infection, dehiscence, and a constricted introitus complicating an episiotomy can cause long-term postpartum dyspareunia.1 Moreover, Ejegard et al investigated the long-term quality of women's sex life (12-18 months after a first episiotomy-assisted delivery)8 and reported an adverse effect of episiotomy on women's sex life during the second year postpartum.8

Effect of mode of delivery: Only eight studies fulfilled the criteria, and full papers were retrieved. There was one randomised controlled trial, one prospective cohort study, and one cross-sectional study; the other five were performed retrospectively (four questionnaire surveys and one interview survey). The total pooled sample included 3476 cases (1185 caesarean sections vs. 2291 vaginal deliveries). Four studies compared PPFSD aspects alongside other variables, such as pelvic floor morbidity, urinary incontinence, and faecal or flatus incontinence;9,10,11,12 the other four studies compared PPFSD variables such as dyspareunia without other pelvic floor morbidity variables.13,14,15,16 The studies agreed that there were fewer sexual problems after caesarean section than after vaginal delivery in the short term (i.e. up to 3 months postpartum). In the long term (i.e. more than 12 months postpartum), however, the outcomes were conflicting. A meta-analysis was conducted to compare and summarise the long-term PPFSD results (Graph 1).

Graph 1: Forest plot of comparison between studies. Studies to the left of the midline were in favour of fewer long-term PPFSD symptoms with caesarean section compared to vaginal delivery.

Discussion: From the above results, birth tract trauma is a risk factor that may lead to PPFSD. It is therefore a logical presumption that avoiding pelvic floor injury by performing a caesarean section, especially as an elective mode of delivery, may alleviate PPFSD. This presumption, if true, would have very significant clinical and financial implications in practice, especially given the pre-existing problem of a rising caesarean section rate in many parts of the developed world. So what research evidence is available in the literature to support or refute this presumption? The question becomes more challenging when we note that the British National Sentinel Caesarean Section Audit showed that 50 percent of consultant obstetricians agreed with the statement ‘‘elective caesarean section will least affect the mother’s future sexual function’’.17 From the meta-analysis above, there is some evidence that caesarean section may alleviate long-term PPFSD compared to vaginal delivery (p=0.02). But if we examine the studies’ subgroups and primary/secondary results in more detail, this evidence appears insufficient. Griffiths et al (2006), in their questionnaire survey of 208 women from the Cardiff Birth Survey database, showed a significant increase in the prevalence of dyspareunia two years after vaginal birth compared to caesarean section.9 However, their comparison was between vaginal birth and elective caesarean section, as they excluded emergency cases.9 Moreover, they found a similar increase in the prevalence of urinary incontinence, incontinence of flatus, and subjective depression in the vaginal birth group, which leads us to question whether the dyspareunia was related to these factors rather than to vaginal birth itself.
In their paper they did not mention whether vaginal birth without tears or complications was associated with a higher incidence of dyspareunia. In contrast, Klein et al (2005) concluded that women who had an intact perineum after vaginal birth had less dyspareunia than those who underwent caesarean section.12 However, the incidence of dyspareunia in that study was higher among women who had an episiotomy, with or without forceps.12 Similar findings were reported by Buhling et al (2006) and Safarinejad et al (2009), who showed that the persistence of dyspareunia beyond 6 months after delivery was highest after operative vaginal delivery.15,16 Buhling et al concluded that the incidence of persistent dyspareunia was similar in the caesarean section and the spontaneous vaginal birth without injury groups (approximately 3.5%), whereas Safarinejad et al (2009) showed that women after elective caesarean section had the highest Female Sexual Function Index (FSFI) compared to other delivery groups, including normal vaginal delivery without injury or episiotomy.15,16 Although the Safarinejad et al (2009) study was robust in many respects, such as using the FSFI and studying the sexual function score for both the women and their partners, its main weakness is that it included only primiparous women.16 We therefore cannot generalise its findings to women in their second or subsequent pregnancies. Moreover, as a previous caesarean section increases the operative risk of successive caesarean sections, or adds risk to a trial of labour if this is opted for in the future, we can expect an increased rate of sexual disorders in subsequent pregnancies.

From the above discussion, we found insufficient evidence to advocate performing a caesarean section on the basis of alleviating PPFSD. This evidence is outweighed by the higher risks of caesarean section, including bleeding, infection, anaesthetic risk, deep vein thrombosis, pulmonary embolism, impairment of future fertility, risk of scar dehiscence in the next labour, injury to bladder and bowel, and risk of fetal laceration.

Author’s conclusion:

Risk factors for PPFSD: In this review, there is good evidence to suggest that episiotomy is an important risk factor for short-term PPFSD. However, there is little evidence to support a possible long-term effect, except where other complications of episiotomy occur later. Breastfeeding and the use of the progestogen-only pill as a contraceptive are other risk factors identified by other studies;18,19,20 these may act through low oestrogen levels and the consequent vaginal dryness.18,19,20 Other risk factors for PPFSD include the lack of postpartum sexual health counselling and treatment.2,21

Effect of mode of delivery: Postpartum female sexual dysfunction is a common problem that can sometimes be overlooked in practice. Awareness of the problem is half of the solution; the other half consists of identifying risk factors, careful antenatal and postnatal counselling and sexual health assessment, and educating women, their partners, and staff about diagnosis and management of the problem. Episiotomy and severe obstetric trauma are the main risk factors. Restricted use of episiotomy and early management of episiotomy complications can play an important role in preventing persistent PPFSD. There is insufficient evidence to recommend caesarean section as a better mode of delivery in terms of preventing or alleviating PPFSD.
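As a methodological footnote to Graph 1: pooled comparisons of this kind are typically computed as an inverse-variance weighted average of per-study effect sizes. The sketch below illustrates that calculation in Python; it is a minimal illustration only, the study names and 2x2 counts are hypothetical placeholders rather than the data behind Graph 1, and the fixed-effect log odds ratio approach shown is one common choice rather than necessarily the method used in this review.

```python
# Minimal sketch of an inverse-variance (fixed-effect) pooled odds ratio,
# the kind of calculation summarised in a forest plot such as Graph 1.
# Study names and 2x2 counts are HYPOTHETICAL placeholders.
import math

# (study, events_cs, n_cs, events_vd, n_vd): women reporting long-term
# PPFSD after caesarean section vs. vaginal delivery (invented numbers).
studies = [
    ("Study A", 12, 150, 30, 200),
    ("Study B", 8, 100, 20, 180),
    ("Study C", 15, 120, 25, 160),
]

weights, log_ors = [], []
for name, a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                  # non-events in each arm
    log_or = math.log((a * d) / (b * c))   # log odds ratio for this study
    var = 1/a + 1/b + 1/c + 1/d            # variance of the log odds ratio
    weights.append(1 / var)                # inverse-variance weight
    log_ors.append(log_or)
    print(f"{name}: OR = {math.exp(log_or):.2f}")

pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))           # standard error of pooled log OR
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

A random-effects weighting (e.g. DerSimonian-Laird) would be the usual alternative when, as here, the pooled studies are heterogeneous in design and follow-up.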
Humans have always been interested in altering their bodies. Whether through piercings or tattoos, for aesthetics, religious reasons, or self-expression, the practice of body modification is a well-known art.1 One less familiar and less easily observed type of body modification is genital piercing. Genital piercings (GP) are defined as developing a tract under the skin with a large-bore needle to create an opening in the anatomical region for decorative ornaments such as jewellery.2-3 Historically, GPs are not a new procedure.
Currently, this once taboo practice is on the rise, and more men with GP are presenting to clinics and hospitals with a variety of medical needs.3 From the rare Pubic Piercing (a piercing through the dorsal base of the penis) to the Guiche (a piercing through the perineum), the male genitalia provide ample area to pierce. Men commonly choose from nine different types of GP and often use three major types of piercing jewellery (Figure 1).3-6
Figure 1 Common Types of Genital Piercings (GP) Worn by Men
Illustrations by Larry Starr, Senior Design Specialist Texas Tech University Health Sciences Center. Text modified with permission: Urologic Nursing 2006, 26(3), 175-176.
This rapid growth trend is creating its own set of complications and questions among clinicians. The medical literature suggests the most common risks are infection and bleeding, but there are other structural considerations as well.3-4,6-8 An example involves the most widely known and commonly encountered male GP, the Prince Albert: the jewellery pierces the urethral meatus and exits through the ventral surface of the penis. The piercing effectively creates a fistula for urine to drain, and many men report needing to sit down during urination because of the change in stream and difficulty in aiming.3,4 Other single case reports describe more severe complications: Fournier’s gangrene, urethral tears, priapism, post-coital bleeding or jewellery lost in female partners, paraphimosis, and recurrent sexually transmitted diseases.8-20
Given the variety of negative issues that could arise from GP, any question related to the health and well-being of men with an intimate piercing should be directed to a well-informed clinician. Currently, when questions or problems arise, men are more likely to seek assistance from the internet or a piercer rather than a health care provider.3,21-22 Considering the limited medical literature, as well as the minimal availability of clinicians knowledgeable about body piercings and modifications, men with GP are at high risk of delays in appropriate treatment of piercing-related complications, as well as in overall preventive healthcare. Overconcentration by clinicians on the presence of GP could also delay important health care.23
Our purpose for this study was to elucidate information about men with GP in order to aid the clinician in providing relevant information for patients considering GP, as well as to provide further scientific evidence by examining their demographics, risk behaviours, procedural motives and post-piercing experiences. Additionally, several motives or characteristics of those with body art such as depression, abuse, self-esteem, and need for uniqueness were examined.24-29 Authors of this study have experience in urology, various aspects of piercing, and two decades of published body art research.
The problems in attempting any study of those with GP are reaching a sizeable sample and finding an acceptable data collection methodology, as those with GP are a hidden population that is difficult to contact. Networking or “snowball” sampling, with anonymous questionnaires, becomes one approach,30 but this also makes it difficult to validate whether respondents actually have GP. To address this issue, survey questions were written specifically for individuals with GP, making the survey extremely difficult and time-consuming to answer for respondents without applicable experience. Previous research experience also indicates that after about 10-15 questions interest can wane and the questionnaire will not be completed.3,7,31
Only two published studies could be located to provide preliminary information about individuals with GP.21,22 In the first study,21 data collected in 2000 (and published in 2005) came from a national convenience sample of 63 women and 83 men with nipple and/or genital piercings. Forty-eight men in the study had GP; the average man was 31 years of age, single, heterosexual, Caucasian, in good to excellent health, sought out annual physicals, possessed some college education, and spoke of moderately strong religious faith. Almost all were employed, reporting an average annual salary of $36,000 or higher. Over half admitted to being risk takers and continued to believe they were; many also had 3 or more general body piercings. Most did not smoke or use drugs routinely, and no questions about alcohol use were asked in that study. Their average age at first sexual intercourse was 15.7 (the national male average is 16.9).32 Of those who participated in sport activities or exercise (37%), none reported problems. They voiced minimal, if any, regret about obtaining a genital piercing and would repeat the procedure. The Prince Albert was the most common male GP. Few (12%) voiced any problems with their GP, with urinary flow changes and site hypersensitivity the most frequently mentioned. Six participants stated that partners had refused sexual intercourse with them after their GP. One case of STD (gonorrhoea) was reported post-procedurally.
As the internet survey demonstrated marked success in reaching those with GP, a similar study was undertaken to query a larger cohort of men with GP to increase clinician awareness in caring for men with GP. Thus, a cross-sectional descriptive study of men with GP was conducted so the collected information could be compared with the previously mentioned studies of those with GP.21,22 To ensure that the rights and dignity of all research participants were protected, exempt study status was obtained for this study from the university institutional review board. Notices of the study and a request for participation were posted on a number of popular body piercing sites with the assistance of an internationally-known Expert Piercer. The survey was available on the web for a total of 6 months during late 2008 and early 2009.
Questionnaire
Questionnaire items were based on a review of the literature, the Armstrong Team Piercing Attitude Survey,31 previous work examining women with GP,3,21-22,33 and recent findings about those with body art.24-29 The study purpose and benefits were presented on the front page of the survey. Subjects were informed that completion of the survey indicated their consent to participate and that they could stop at any point if they were uncomfortable with a question(s). Ethnicity was included to note GP acquisition patterns; the ethnic categories were not defined and participants self-reported. Assurances were provided that the information would be analysed as group data and that no identifying information would be sought. Respondents were encouraged to answer questions honestly and not to be offended by any questions, as some related directly to unsubstantiated assumptions written about GP in the medical literature.21-22 There was no way to tabulate how many individuals viewed the survey without starting it.
The survey had 4 sections: (a) obtaining the GP (13 questions); (b) personal experiences with the GP (32 questions); (c) general information, including depression and abuse (26 questions); and (d) sexual behaviour, including forced sexual activity (12 questions). Four scales were also included: motives (14 items), outcomes (16 items), pre- and post-procedural self-esteem (16 items), and need for uniqueness (4 items). Previously reported reliabilities were 0.75 for the motive scale,22 0.88 for the outcome scale,22 and 0.80 for the need for uniqueness scale;25 data were not available for the self-esteem scale.34 Various response formats were used throughout the survey, such as a 5-point Likert scale (1 = strongly disagree or unlikely to 5 = strongly agree or likely), multiple choice, and short answers.
Data Analysis
The Statistical Package for the Social Sciences (SPSS, version 16.0) was used for data analysis to obtain frequencies, cross-tabulations, and chi-square analyses.30 Additionally, t-tests were used to compare means of similar questions from the 2005 and 2008 studies with data from the current study. Significant differences were found for both study samples, so they were judged to be groups distinct from the current study.
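To make the analysis concrete for readers without SPSS, the following is a minimal re-creation of the same kinds of tests in Python with scipy; the 2x2 counts and sample values are hypothetical placeholders, not the study’s data.

```python
# Illustrative re-creation (not the authors' SPSS output) of the kinds of
# tests used in this study: a chi-square cross-tabulation (e.g. pre- vs
# post-piercing risk-taking) and a two-sample t-test. All numbers are
# HYPOTHETICAL placeholders.
from scipy.stats import chi2_contingency, ttest_ind

# Rows: risk taker before piercing (yes/no).
# Columns: remains a risk taker after piercing (yes/no).
table = [[180, 42],
         [18, 160]]
chi2, p, df, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {df}, p = {p:.3f}")

# t-test comparing a mean (e.g. age) between two study samples,
# again with invented example data.
sample_2005 = [31, 28, 35, 30, 33, 29]
sample_current = [36, 38, 34, 37, 40, 35]
t, p = ttest_ind(sample_2005, sample_current)
print(f"t = {t:.2f}, p = {p:.3f}")
```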
RESULTS
Study Population
While 545 respondents started the survey, responses were analysed from 445 men with GP (82%) residing in 42 US states and 26 other countries; they declared a total of 656 piercings. Clusters of participants were evident from California (22), New York (17), Texas (16), Florida (11), Europe (43), Canada (21), and Australia (20). Ages of the men with GP at survey time ranged from 15 to 72 (Table 1). The average participant was 36 years of age, Caucasian, with some college education, married, in excellent health, sought out annual physicals, reported no or few friends with GPs, and declared a salary of $45,000 or higher. Religious beliefs were grouped into either non-existent or moderately strong to very strong faith. There were almost equal numbers of blue-collar and white-collar workers; others were from health care, the arts, academia, or the military, while some were self-employed; very few mentioned unemployment or retirement.
Table 1 Self-Reported Characteristics of Men with Genital Piercings (GPs)
Demographics (Current Study,* N = 445)
Age at time of survey: 20 or under: 61/29%; 21-35: 77/36%; 36-50: 41/19%; 51+: 33/16%
Ethnicity: Caucasian: 319/89%
Marital status: Single: 96/27%; Living with significant other: 69/20%; Married with/without children: 143/41%
Education: High school diploma: 34/10%; Some college: 113/32%; Bachelor's degree: 77/22%; Graduate/doctoral degree: 88/20%
Occupation: Technical/vocational: 90/28%; Professional: 92/29%; Students: 44/14%; Artists: 23/07%
Salary: <$45,000: 135/44%; $45,000+: 169/56%
Strength of religious faith: Non-existent: 135/39%; Moderately strong to strong: 99/28%
State of health: Excellent: 310/88%
Health care visits: Annual physicals: 150/43%; Only when problems: 142/40%
Close friends with GPs: None: 239/68%; 1-3: 100/28%; 4+: 14/4%
Feel sad/depressed (little/some): Pre-piercing: 248/57%; Post-piercing: 210/59%
*Numbers will not always add up to 100 because of missing data or multiple answers.
Risk Behaviours
Those who reported pre-procedural risk-taking tendencies continued to have significant tendencies post-procedurally (χ2 = 2.13, df = 16, p = 0.000) (Table 2). Some risky behaviour was observed; over half had body art, with an average of 2 or more piercings, as well as tattoos. Alcohol use was infrequent, but when they did drink, they had 5+ drinks. Other answers did not bear out the risk-taker image, with monogamous, heterosexual relationships and limited tobacco and drug use. Their average age at first intercourse was 17.05 (national male average 16.9).32 Most (391/88%) did not report STDs before their piercings, but of those who did itemise their STDs, chlamydia was the most frequently mentioned (n = 18).
Table 2 Self-Reported Risk Behavior From Men with Genital Piercings (GPs)
Risk behaviour (Current Study,* N = 445)
Age at first intercourse: Never had intercourse: 12/03%; 12 or less: 14/04%; 13-15: 80/25%; 16-18: 160/48%; 19+: 74/23%
Sexual orientation: Women: 286/82%
Risk taker before piercing: 222/52%
Remains a risk taker: 198/52%
Cigarettes smoked: None: 252/75%; ½-1 pack daily: 75/22%
Monthly alcohol consumption: 1-3 times: 118/33%; 5+ drinks at one sitting, 1-3x: 191/55%
Drugs used monthly: None: 294/87%; 1-15 times: 27/08%
Sexual partners in 6 months: One: 211/62%; Two or more: 98/32%
General body piercings: None: 119/27%; 1-4 piercings: 259/59%; 5+ piercings: 108/33%
Tattoos: None: 115/35%; 1-4: 134/38%; 5+: 76/21%
STDs before piercing: 54/12%
*Numbers will not always add up to 100 because of missing data or multiple answers.
Genital Piercing Procedure
A deliberate delay between first considering a GP and deciding to have one was evident, as many had waited almost 5 years before procurement (Table 3). Over half reported the Prince Albert GP, with another third choosing a Frenum/Frenum Ladder (Figure 1). While a small to moderate amount of pain and bleeding was reported procedurally, virtually no drugs or alcohol were used before the GP.
Table 3 Self-Reported Procedural Information From Men with Genital Piercings (GPs)
Genital piercing procedure (Current Study,* N = 445)
Amount of decision time: Waited a long time, then a few minutes: 49/24%; A long time (over a year): 143/37%
Age of GP decisions: Consideration: 29 years; Procurement: 34 years
Type of genital piercing: Ampallang; Apadravya; Dydoe; Foreskin; Frenum/Frenum Ladder; Guiche; Hafada; Prince Albert; Other
*Numbers will not always add up to 100 because of missing data or multiple answers.
Motives and Outcomes
Table 4 illustrates participant motives and outcomes for each group in the various GP studies.21,22 For the highest motive response of “just wanted one” there was consistency across the three studies; the top five responses were similar but ranked differently. Alpha measurements for the motive response scale ranged from 0.40 to 0.75, except in our current study, where the covariance matrix was zero or approximately zero, so statistics based on its inverse could not be computed. Motives centred around wanting a GP, trying something new, having more functional sexual control, and seeking uniqueness. Measurable outcomes (alpha range 0.88-0.89) of their GP revolved around sexual expression, uniqueness, and aesthetics, as well as the improvement of their personal and their partner’s sexual pleasure. In summary, their motives for the GP were met in their stated outcomes.
Table 4A Three Study Comparison Of Self-Reported Motives and Outcomes From Those Wearing Genital Piercings.
Caliendo et al, 2005 study (data collected 2000; men with GPs, N = 48*)
Motives for their genital piercing: “Just wanted one” 34/71%; “Trying to feel sexier” 24/50%; “For the heck of it” 23/45%; “Wanted to be different” 18/38%; “Make myself more attractive” 18/38% (alpha 0.40)
Outcomes of their genital piercing: “Improved my sexual pleasure” 36/77%; “Helped express myself sexually” 35/73%; “Helped me feel unique” 35/73%

Young et al, 2010 study (data collected 2008; women with GPs, N = 240*)
Motives: “Just wanted one” 163/70%; “Trying to feel sexier” 120/51%; “More control over my body” 111/48%; “Seeking uniqueness” 93/40%; “Make myself more attractive” 91/39% (alpha 0.75)
Outcomes: “Helped express myself sexually” 176/76%; “Improved my sexual pleasure” 173/75%; “Helped me express myself” 157/68%; “Helped me feel feminine” 134/58%; “Helped me feel unique” 134/58% (alpha 0.88)

Current study (data collected 2009; men with GPs, N = 445*)
Motives: “Just wanted one” 196/90%; “For the heck of it” 73/60%; “Trying to feel sexier” 67/60%; “More control over body” 56/58%; “Seeking uniqueness” 51/56% (alpha unobtainable)
Outcomes: “Improved my sexual pleasure” 278/81%; “Helped express myself sexually” 234/71%; “Helped me feel unique” 218/67%; “Improved partner's sexual pleasure” 229/67%; “Helped genital look better” 211/64% (alpha 0.88)

*Numbers will not always add up to 100 because of missing data or multiple answers.
Post-piercing Experiences
The men reported continued satisfaction with their GP and would repeat the procedure. While not many engaged in exercise/sport activities, those who did were active (Table 5). A few reported partner refusal of sexual activities when their GP was in place. Almost half reported no piercing complications; of those who did, only 2 major problems were cited. First, with over half reporting Prince Albert piercings, it was not surprising that 25% described changes in their urinary flow. Site hypersensitivity was the second most reported problem (23%); otherwise there were no further trends towards other severe complications. While 80 (18%) reported STDs after their GP, only 19 itemised the specific type; the most common was chlamydia (9). Those with a history of STDs before their piercings (Tables 2 & 5) were significantly more likely to have them post-procedurally (χ2 = 11.5, df = 1, p = 0.001).
Table 5 Self-Reported Post Procedural Information From Men with Genital Piercings (GPs)
Complications from piercing: No problems; Change in urinary flow; Site hypersensitivity; Skin irritation; Rips/tears at site; Problems using condoms; Keloids at site; Site infection; Urinary tract infection; Site hyposensitivity; Sexual problems; Jewellery embedded; Erection problems; Other, not named
*Numbers will not always add up to 100 because of missing data or multiple answers.
Depression, Abuse, Self-Esteem, and Need for Uniqueness
Four additional characteristics of individuals with GP were examined.24-29 The men with GP reported a small amount of “sad or depressed feelings”; those who had these depressed feelings before their piercings were significantly more likely to continue to have them post-procedurally (χ2 = 4.1, df = 16, p = 0.00). Only 5 (1%) reported being forced to participate in sexual activity against their will, while a few (56/12%) cited physical, emotional, or sexual abuse.
To extract a profile of self-esteem, 8 questions were asked in each of the pre- and post-piercing survey sections; the internal consistency (Cronbach’s alpha) of both scales was 0.75. Responses at the pre-procedure (M = 22.3, SD = 4.51) and post-piercing (M = 23.1, SD = 3.97) time points were highly correlated at 0.79 (p < 0.01). Two statements triggered split negative and positive responses: “I make demands on myself that I would make on others” and “I blame myself when things do not work the way I expected.” Lastly, need for uniqueness (NU) was assessed using a four-item scale24,25 in the pre-piercing survey section. When all responses on the scale were totalled (maximum 20), the mean was 11.3, close to the moderate level (Cronbach’s alpha 0.86), documenting a more positive perspective about their GP and an intentional wish to be different, distinctive, and unique. When asked if their overall feelings of NU had changed since obtaining their GPs, those who had NU before their piercings were significantly more likely to have it post-procedurally (χ2 = 11.5, df = 16, p = 0.03).
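For readers unfamiliar with the reliability statistic used here, the following is a minimal sketch of how Cronbach’s alpha is computed for a multi-item scale; the simulated responses are hypothetical, not the study’s data.

```python
# Minimal sketch of Cronbach's alpha for an 8-item scale like the
# self-esteem measure above (reported alpha = 0.75). The responses
# below are HYPOTHETICAL simulated data, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(40, 1))           # shared 'trait' component
noise = rng.integers(-1, 2, size=(40, 8))         # per-item variation
scores = np.clip(base + noise, 1, 5)              # correlated Likert responses
print(f"alpha = {cronbach_alpha(scores):.2f}")
```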
DISCUSSION
When examining these data from men with GP alongside the 2005 published study,21 the combined cohort approaches 500 participants. To our knowledge this is the largest repository of data currently available to provide evidence on the demographics and health issues of men with GP. The anonymous data, obtained by networking sampling and an accessible, economical web-based survey, could be viewed as a study limitation. Yet finding similarities between these data and data collected almost ten years ago suggests that our findings tapped into a core body of knowledge about men with GP. Similar data, obtained at different times from different respondents, increase the credibility of the information and lend it further generalisability to influence use in practice.30
The “social reality”2 of the GP phenomenon is here. All of the men had at least one type of GP, some had multiple GPs, and many had other general body piercings.35 Awareness of the current types of body modification, including GP, will help the clinician educate and inform adequately, give professional advice, and provide a realistic picture of structural considerations. Respondents stated that their GP was an important and satisfying part of their life, that they still liked it and would repeat the procedure; the GP improved their sexual activities, few partners refused sexual intercourse, those who exercised were active, and they were not troubled by GP complications. From a medical standpoint the insertion of a GP could be considered a minor surgical procedure, and yet the data suggest that when the GP is performed by experienced hands only minimal side effects are reported. Thus, finding a knowledgeable, expert piercer is an important educational theme. However, patients also need to be aware that certain types of piercing may require some behavioural changes, such as in toileting and consistent body cleaning. Unfortunately, virtually no health care providers, including clinicians, were mentioned in the GP decision-making process or care; respondents usually went to the internet or returned to a piercer for information.21,22 Hopefully, as more clinicians become aware of GPs, those considering GP will find their physician a helpful and more informative resource.
These study participants with GP were older, well-educated men, often in stable relationships, unlike what is usually assumed about people with body piercings.7,22,26-27,29,31 This scientific evidence about their overall demographics poses challenges to the current medical literature. Sample demographics from this study and the other two cited GP studies21,22 do not reflect individuals from stereotypical low-performing social and economic backgrounds. Demographically, the people with GP were in their early thirties, Caucasian, heterosexual, well educated, employed, in good health, and held some religious beliefs, but were not ethnically diverse. In contrast to literature describing men with GP as antisocial miscreants or mostly homosexual,2,4,18 our data support that these men are largely part of mainstream culture. Avoiding a “rush to judgment”28 is important, especially given how these men are often perceived.
Men with GP did not deny their propensity for risk-taking, but being a risk taker was not synonymous with being deviant; it was more about achieving individualisation.21,28,31 Threads about stable relationships ran throughout their information, including sexual orientation, marital status, GP complications, and even their lack of many risk behaviours. Their age at first sexual intercourse was close to the national male average. While procurement of any type of body art is thought to be impulsive,7,21-23 their GP decision-making was deliberate, as was their ongoing, conscientious care of their piercings.21,22 Absence of alcohol and/or drug consumption before the GP procedure has been a frequent finding in other body art studies.7,21-22,31 Reputable piercing artists advocate no use of alcohol or drugs, as they want their customers to make realistic procedural decisions about their GP and to listen carefully to post-GP care instructions.
The unsubstantiated assumptions in the literature about GP complications, such as male infertility, scrotal infections, reduction of erotic stimulation, and frequent infections with bicycle rides, were also challenged.6,21,36-40 Overall, only two problems, urinary flow changes and site hypersensitivity, were reported with their GP. The men took their sexual concerns seriously, as part of the internal influences of self-esteem and their need for uniqueness. Their documented motives reflected sexual enhancement, aesthetics, and uniqueness. Their stated outcomes of the GPs reflected an ability to express themselves sexually and to create a sense of uniqueness; these elements evidently took precedence over the two problems of urinary flow changes and hypersensitivity. Both the motives and the outcomes were similar to those of the other two studies.21,22 Further procedural research is suggested to establish why some men with a Prince Albert GP have urinary flow changes while others do not, with a view to eliminating this as a possible side effect.
Negative bias continues with the assumption that individuals with GP frequently have STDs.18-20,36-40 Historically, concern about those with “exotic adornments” such as body piercings has led some health facilities to require STD screening, no matter what the nature of the presenting complaint.22,35 Yet, in this study and the other two related GP studies,21,22 respondents reported only a few STDs. Their reported incidence of STDs was low compared with the national Guttmacher Institute estimate that one in three sexually active people will have contracted an STD by age 24.32 As in this study, chlamydia remains the most frequently reported STD in the US.32 While it is important always to conduct a thorough sexual history,20 perhaps the conscientious care associated with the deliberate decision for the GP, and the mostly monogamous relationships reported, may account for the limited reporting of STDs. One STD clinic study found that neither socioeconomic status, method of contraception, multiple partners, nor the presence of genital infections correlated with GP.38 Further longitudinal research is suggested to examine the long-term effects of GPs, as well as further GP complications and STD prevalence.19
Men, like women, with GP21 reported depressed feelings26,27,29 both pre and post procedure, but gender differences were present with abuse and forced sexual activity. The men with GP reported few incidents of abuse (emotional, physical, or sexual) or forced sexual activity against their will whereas over a third of the women with GP22 reported this. Although women frequently spoke of their use of GPs to take more control in reclaiming their body to “free them from the bonds of molestation and give them strong feelings of empowerment,” 22 men verbalized their use of GPs to give them more sexual control.
STUDY LIMITATIONS
As with any study, several limitations to the generalisability of the data must be considered; one, relating to methodology, has been discussed previously. This was a non-experimental, descriptive study design, and respondents self-selected to complete a web-based survey. Bias, inaccurate recall, and/or inflation can result from self-reporting.30 Respondents had to use personal judgment to interpret questions, and with an anonymous survey socially desirable responses could have been entered. Participants with strong negative or positive feelings may have been more likely to complete the survey. Yet, as random sampling is almost impossible in a hidden population, and in spite of these limitations, the respondents did contribute further quantitative data.21,22
CONCLUSIONS
The trend of obtaining GP continues to increase and is not limited by age, gender, socio-economic background, or sexual preference. Many in this study still reported seeking the advice of a piercer or the internet. As an identified population at risk for quality health care, further evidence about demographics, piercings and jewellery, motivations, outcomes, and health issues was presented about men with GP so that clinicians can provide clinically competent and applicable approaches to care. The collective data examined here, along with data collected almost ten years ago, begin to dispel some of the negative assumptions about this segment of the body modification population regarding their overall demographics, GP complications, and STD prevalence.
The Royal College of Psychiatrists first introduced the CASC in June 2008. It is based on the OSCE style of examination but is a novel method of assessment, as it tests complex psychiatric skills in a series of observed interactions.2 The OSCE (Objective Structured Clinical Examination) is a format in which candidates rotate through a series of stations, each station being marked by a different examiner. Before the CASC was introduced, candidates sat an OSCE in Part 1 and the ‘Long Case’ in Part 2 of the MRCPsych examinations. The purpose of introducing the CASC was to merge the two assessments.3
The first CASC diet tested skills in 12 stations in one circuit. Subsequently, 16 stations have been used in two circuits: one comprising eight ‘single’ scenarios and the other four pairs of ‘linked’ scenarios. Feedback is provided to unsuccessful candidates in the form of ‘Areas of Concern’.4 The pass rate has dropped from almost 60% in the first diet to around 30% in the most recent examination (Figure 1). The reasons for this are not known. The cost of organising the examination has increased, and candidates will pay £885 to sit the examination in the United Kingdom in 2010 (Figure 2).
Figure 1
We share our experience of the CASC examination in the hope that it will be useful reading for trainees intending to sit the CASC and for supervisors assisting trainees in their preparation. In preparing this submission we have also made use of anecdotal observations from colleagues, and we have drawn on our experience of organising local MRCPsych CASC training and small-group teaching employing video recording of interviews.
Figure 2
The CASC evaluates two domains of a psychiatric interview: ‘Content’ (knowing what you need to do) and ‘Process’ (how you do it). The written papers (1, 2 and 3) test candidates’ knowledge, so we feel that candidates already possess the essentials of the ‘Content’ domain. The more difficult aspect is therefore demonstrating an appropriate interview style to the examiner in the form of the ‘Process’.
This article discusses the preparation required before the examination followed by useful tips on the day of the examination.
Before the examination day (table 1)
Table 1: Tips before the examination day
- The mindset: have a positive attitude
- Time required: start preparing early
- Analysing areas for improvement: use ‘Areas of Concern’
- Practice: group setting and individual sessions; feedback from colleagues using video
The mindset
In our view, preparation for the CASC needs to begin even before the application form is submitted. Having a positive mindset will go a long way towards enhancing the chances of success.5 It is therefore essential to believe in one’s ability and dispel any negative cognitions. Understandably, previous failure in the CASC can affect one’s confidence, but a rational way forward is to treat the failure as a means of experiential learning, a very valuable tool. Experiential learning occurs when changes in judgements, feelings, knowledge or skills result from living through an event or events.6
Time required
Starting to prepare early is crucial as it gives time to analyse and make the required changes to the style of the interview. For instance, a good interview requires candidates to use an appropriate mixture of open and closed questions. Candidates who have been following this technique in daily practice will find it easier to replicate this in examination conditions when there is pressure to perform in limited time. However, candidates who need to incorporate this into their style will need time to change their method of interview.
Analysing areas for improvement
Candidates need to identify specific areas where work is needed to improve their interview technique. The best way to accomplish this is through an early analysis of their interview technique by a senior colleague, preferably a consultant who has examined in the real CASC examination. We think it is best to provide feedback using the Royal College’s ‘Areas of Concern’, the individual parameters used to provide structured feedback in the CASC. This will help candidates accustom themselves to the expectations of the actual examination.
Requesting more than one ‘examiner’ to provide feedback is useful, as it can provide insight into ‘recurring mistakes’ which may have become habitual. In addition, different examiners might provide feedback on different aspects of the interview style. The Calgary-Cambridge guide7,8 is a collection of more than 70 evidence-based communication process skills and is a vital guide for learning the basics of good communication.
Practice
We believe that it is important to practise in a group setting. Group work increases productivity and satisfaction.9 The aim of group practice is to interact with different peers, which helps candidates become accustomed to varying communication styles. Group practice is more productive when the group is dynamic, so that novelty prevails. Practising with the same colleagues over a period of weeks carries the risk of creating a false sense of security: candidates get used to the style of other candidates and, after a period of time, may no longer recognise areas for improvement.
Another risk of a static group is that candidates may not readily volunteer areas for improvement, either because they fear offending the person or, more importantly, because the same point may have been discussed multiple times before! Whenever possible, an experienced ‘examiner’ may be asked to facilitate and provide feedback along the lines of the ‘Areas of Concern’. However, candidates need to be conscious of the pitfalls of group work and its negative aspects, such as poor decisions and conflicting information.
In addition to group practice, candidates would benefit immensely from individual sessions where consultants and senior trainees could observe their interview technique. Candidates could interview patients or colleagues willing to role-play. We have observed that professionals from other disciplines like nurses and social workers are often willing to help in this regard. Compared to group practice, this needs more effort and commitment to organise. Consultants, with their wealth of experience, would be able to suggest positive changes and even subtle shifts in communication styles which may be enough to make a difference. We found that video recording the sessions, and providing feedback using the video clips, helps candidates to identify errors and observe any progress made.
Feedback from trainees who have sat the CASC examination suggests that attending CASC revision courses helped them prepare. It is beyond the remit of this article to discuss individual courses in detail. The majority of courses employ actors for role-play, and this experience is helpful in preparing for the CASC. Courses vary in style, duration and cost. Candidates attending courses early in their preparation seem to benefit more, as they have sufficient time to apply what they have learnt.
During the examination (table 2)
Table 2: Tips during the examination
- Reading the task: fast and effective reading; focus on all sub-tasks
- Time management: ‘wrap up’ in the final minute
- The golden minute: establish initial rapport
- Leaving the station: avoid ruminating on the previous station
- Expecting a surprise: fluent conversation with empathy
Reading the task
Inadequate reading and/or understanding of the task leads to poor performance. Candidates have one minute of preparation time in single stations and two minutes in linked stations. We have heard from many candidates who have sat the examination that some tasks include a long patient history. This requires fast and effective reading, using methods such as identifying words without focusing on each letter, not sounding out all words, skimming some parts of the passage, and avoiding sub-vocalisation. It goes without saying that this needs practice.
The CASC differs from the previous Part 1 OSCE in that it can test a skill in more depth. For example, it may ask candidates to demonstrate a test for a focal cognitive deficit that would not be detected by a superficial mini mental state examination.
Candidates need to ensure they understand what is expected of them before beginning the interview. In some stations, there are two or three sub-tasks. We believe that all parts of a task have a bearing on the marking.
An additional copy of the ‘Instruction to Candidate’ is available within the cubicles. We suggest that, when in doubt, candidates refer back to the task so that they do not go off track. Referring to the task in a station will not attract negative marking, but it is best done before initiating the interview.
Time management
It is crucial to manage time within the stations. A warning bell rings when one minute is left before the station concludes. This can be used as a reference point to ‘wrap up’ the session. If the station is not smoothly concluded before the final bell, candidates may come across as unprofessional. They also run the risk of losing valuable time for reading the task at the next station.
Single stations last seven minutes and linked stations last ten minutes. Candidates who have practised with strict timing can sense when the warning bell will ring and are able to use the final minute to close the session appropriately.
Having stressed the importance of finishing stations on time, it is also vital to understand that an early finish can lead to an uncomfortable silence in the station. This may give the examiner the impression that the candidate did not cover the task; we feel that there will always be something more the candidate could have explored!

The awkward silence in this scenario can make the candidate anxious and prone to ruminating on the station, which must be avoided.
The golden minute
First impressions go a long way in any evaluation and the CASC is no different in this regard.10 Candidates need to open the interview in a confident and professional manner to be able to make a lasting impact and establish a better rapport. Observing peers, seniors and consultants interacting with patients is a good learning experience for candidates in this regard.
Candidates who do well are able to demonstrate their ability to gain the trust of the actors in this crucial passage of the interaction. Basic aspects such as a warm and polite greeting, making good eye contact, and clear introduction and explanation of the session will go a long way in establishing initial rapport which can be strengthened as the interview proceeds.
The first minute in a station is important as it sets the tone of the entire interaction. A confident start would certainly aid candidates in calming their nerves. Actors are also put at ease when they observe a doctor who looks and behaves in a calm and composed manner.
Leaving the station behind
Stations are individually marked in the CASC. Performance in one station has no bearing on the marking process in the following stations. It is therefore important not to ruminate about previous stations as this could have a detrimental effect on the performance in subsequent stations. The variety of tasks and scenarios in the CASC means that candidates need to remain fresh and alert. Individual perceptions of not having performed well in a particular station could be misleading as the examiner may have thought otherwise. Candidates need to remember that they will still be able to pass the examination even if they do not pass all stations.
Expecting a surprise
Candidates should be mentally prepared for a new station, both while preparing and on the day of the examination. Even if they are faced with a ‘surprise station’, it is unlikely to be completely unfamiliar; they have most likely encountered a similar scenario in real life. A calm and composed demeanour, coupled with a fluent conversation focused on empathy and rapport, are the best tools for dealing with a station of this kind.
Conclusion
The CASC is a new examination in psychiatry. It tests a range of complex skills and requires determined preparation and practice. A combination of good communication skills, time management and confident performance is the key to success. We hope that the simple techniques mentioned in this paper will be useful in preparing for this important examination. Despite the falling pass rate, success in this format depends on a combination of practice and performance and is certainly achievable.
Problem based learning (PBL) has been an important development in health professions education in the latter part of the twentieth century. Since its inception at McMaster University1 (Canada), it has gradually evolved into an educational methodology employed by many medical schools across the globe2,3. PBL represents a paradigm shift in medical education, with a move away from a ‘teacher centered’ to a ‘student centered’ educational focus. The assumptive differences between the pedagogy learner and the andragogy learner (Table 1) were summarised by Knowles4, and the andragogy approach underpins PBL. This shift has redefined the role of the teacher in the PBL era, from teacher to facilitator.
Table 1: Differences between the andragogy and pedagogy learner (Knowles)

Characteristics         | Pedagogy                           | Andragogy
Concept of the learner  | Dependent personality              | Self-directed
Readiness to learn      | Uniform by age-level & curriculum  | Develops from life tasks & problems
Orientation to learning | Subject-centered                   | Task- or problem-centered
Motivation              | By external rewards and punishment | By internal incentives and curiosity
It is well known that implementing PBL as an educational methodology requires additional resources compared with a traditional lecture-based curriculum5. In addition, there was a need to recruit and train a large number of tutors to facilitate the PBL process6. Training PBL tutors is an important component of a successful curriculum change, and it is a continuous process. Training workshops and role plays were employed to train conventional teachers, but challenges were faced in developing them into effective PBL tutors5.
The aim of this paper is to evaluate the literature for any evidence supporting the theory that a student from a PBL background may develop into an effective PBL tutor. The Medline, EMBASE and CINAHL databases were searched for any pre-existing literature or research supporting this theory.
Results:
To the best of my knowledge, no evidence supporting this theory has been reported. Given the limited literature, this paper aims to identify common ground between a PBL student and a PBL tutor, and to consider whether being a PBL student may contribute to overall development as a PBL tutor. The discussion revolves around the following domains:
1. Teaching Styles:
The ideal teaching style of a PBL tutor is a facilitative-collaborative style, which augments and supplements the PBL process. The teaching style inventory developed by Leung et al7 hypothesised four domains of teaching styles: the assertive, suggestive, collaborative and facilitative styles. Though a PBL tutor may assume that he possesses the facilitative style, this does not necessarily match the students’ perceptions, as reported by Kassab et al8.
Some of the characteristics of being a PBL student may foster the development of a collaborative teaching style. A PBL student is expected to be a collaborative learner, which is critical for achieving and improving group performance9. The initial years as a student in PBL may therefore contribute to developing the attributes required for this preferred teaching style.
2. Facilitating critical thinking:
PBL is grounded in cognitive psychology and sets out to stimulate curiosity and build durable understanding. One of the roles of the tutor is to foster critical thinking and enhance the group’s ability to analyse and synthesise the given information. This attribute stems from the tutor’s ability to facilitate, rather than teach. Irby10 opined that clinical teachers tend to teach as they themselves were taught, using traditional approaches, which may hinder the stimulation of critical thinking among students.
A tutor from a PBL background would have the ability to think critically, developed through the process of constructing thoughtful and well-structured approaches to guide their choices11. Tiwari et al12 showed that PBL students demonstrated significantly greater improvement in critical thinking than students on traditional courses. Hence, prior exposure to a particular learning style can shape the cognitive habits that contribute to tutor development.
3. Group dynamics:
One of the prime roles of a PBL tutor is to facilitate the PBL process by keeping the group focused on its tasks and guiding it towards its goals. Tutors who are skilled in group dynamics are evaluated more highly than those who are not11,13. Tutors need to develop a sound appreciation of group dynamics, failing which they may foster uncertainty within the group. Bowman et al13 commented on the lack of consideration given to the emotional demands placed on prospective PBL tutors when tutoring small groups, especially the skills required to balance short-term anxieties against potentially serious problems. This imbalance, which usually manifests as unconscious incompetence, may affect group dynamics.
PBL students would have experience of group dynamics and the pressures of working within a group. They would have developed a model for working with members of varying attributes. Bligh et al14 showed that students from a PBL curriculum rated themselves better in team working and motivation than their conventional-course peers. This suggests that an apprenticeship model may be valuable in developing the right skills to be an effective tutor.
The characteristics of a student that may foster the ideal attributes of a PBL tutor are briefly summarised in Table 2; these have evolved from the work of Samy Azer9,11.
Table 2: Common ground

Ideal PBL student                                                            | Ideals of a PBL tutor
Knows his role within a group                                                | Would help in identifying different roles students may play
Knows to ask empowering questions                                            | Would help in guiding groups in achieving learning objectives
Monitors his own progress by self-evaluation and motivation                  | Would help in monitoring individual progress and motivating the group
Bonds with other members to achieve goals                                    | Would help in building trust and encouraging bonding of group members
Develops a thoughtful and well-structured approach to guide choices          | Would help in facilitating critical thinking
Fosters collaboration with other group members to create a climate of trust  | Would facilitate a collaborative teaching style
4. Tutor training:
Considerable resources are expended in teaching new tutors the art of facilitating a PBL group6, and the usual cohort is teachers from a conventional, didactic background. The shift from didactic expertise to facilitated learning is difficult for tutors who feel more secure in their expert role. Finucane et al5 reported that only a minority of staff volunteered to be PBL tutors, possibly reflecting their lack of prior exposure to the PBL style of learning. In spite of tutor training workshops, only 73% of tutors were retained at the end of two years.
Prior exposure as a student may help negate much of the stigma associated with PBL. Such tutors would have observed and learnt from their own PBL tutors, and analysed their contribution to the PBL process. They could reflect on this experience and evolve into ideal PBL tutors. This would help minimise resource expenditure and contribute to staff retention.
5. Tutor comfort zones:
PBL contextualises learning to practical situations, with integration across disciplinary boundaries. Dornan et al15 reported that some teachers felt PBL was a frustrating drain on time because it did not fit their educational style and was a distraction from clinical teaching, demonstrating the ‘conditioning effect’ of prior experiences. This further fuels the debate between content and process expertise, but prior knowledge of the process would benefit both the students and the PBL process.
6. Role modelling:
Role models have long been regarded as important for inculcating the correct attitudes and behaviours in medical students, and being a good role model is considered one of the prime requisites of a teacher. In a recent study, McLean et al16 showed that PBL students tended to report a higher percentage of role models than students from a traditional programme (73% vs. 64%). In an ideal setting, a “content and process expert” would be the perfect role model for PBL students, but this may not be realised in all settings.
Paice et al17 commented on resistance to change within the medical profession, and highlighted the need for training to emphasise the values and attitudes required. This places added emphasis on the tutor to demonstrate the tenacity and virtues needed to be an effective role model, avoiding ‘cognitive clouding’ from previous experiences.
PBL students are exposed to a variety of PBL tutors. They would have incorporated the good points of the effective tutors and reflected on the negative aspects of the others. Reflective practice enables them to develop the right attributes. Though these attributes may be difficult to develop through training workshops, a background of PBL education may help mould the desired tutor characteristics.
Conclusion:
As PBL continues to be employed across different specialties, there will be increasing pressure on medical schools to match the resources needed to implement it. There is an argument for developing an apprenticeship model or recruiting tutors from a PBL background, which would help reduce the cost of training new tutors and counter the negative influences a new tutor may bring. The biggest limitation at present is finding a cohort of tutors with a PBL background, but an apprenticeship model may also benefit teachers from a conventional background. A prospective study exploring the attributes of successful and less successful tutors from traditional, PBL and hybrid curricula, including those who have crossed the Rubicon from traditional teaching to PBL, could effectively answer this question.
Oesophageal compression by a vascular structure resulting in dysphagia is uncommon, although almost every major vascular structure in the chest has been reported to cause some degree of oesophageal compression and subsequent dysphagia(1, 2). In 1794, David Bayford described the case of a 62-year-old lady with dysphagia due to an aberrant right subclavian artery. He coined the term “dysphagia lusoria”, from “lusus naturae”, meaning “freak of nature”(3).
Pill-induced oesophagitis is associated with the ingestion of many medications and is an uncommon cause of erosive oesophagitis(4). Several different classes of drugs have been described as hazardous to the oesophageal mucosa and capable of causing pill-induced oesophagitis(5, 6). Although uncommon in itself, dysphagia lusoria has not previously been described as presenting with pill-induced oesophagitis. We present the first case of dysphagia lusoria presenting as pill-induced oesophagitis caused by testosterone pills in a young, healthy man. A PubMed review of the English medical literature was conducted to discuss the epidemiology, pathogenesis and management of this uncommon disorder.
Case Presentation:
A 26-year-old man with no significant past medical history presented with five days of dysphagia (for both solids and liquids), odynophagia and retrosternal chest discomfort. He admitted to occasional difficulty swallowing solids over the past 2-3 months. He denied any heartburn, cough, regurgitation, loss of appetite, weight loss, fever, chills, haematemesis or melaena, and denied tobacco or alcohol use. Two weeks prior to presentation he had started taking testosterone pills for body-building. Physical examination was completely unremarkable.
A barium oesophagram showed extrinsic oblique compression of the oesophagus at the level of the carina, where it crosses the aorta. CT and MRI of the chest revealed a right-sided aortic arch passing posterior to the oesophagus, with proximal oesophageal dilatation, consistent with dysphagia lusoria. Endoscopy showed erosive oesophagitis with distinct ulceration extending from 18 cm into a pulsatile area of narrowing at 20 cm, with normal mucosa visualised distally.
Biopsies revealed oesophageal squamous mucosa with marked acute inflammation, reactive changes and no evidence of viral inclusions. Surgical management was discussed with the patient but, given the short duration of symptoms and the patient’s stable weight, symptomatic relief with lifestyle changes together with a trial of medication such as a proton pump inhibitor was chosen. At two weeks’ follow-up, while taking a proton pump inhibitor and having discontinued the testosterone pills, the patient had complete resolution of symptoms.
Epidemiology:
Dysphagia Lusoria: Moltz et al found that the lusorian artery has a prevalence of about 0.7% in the general population, based on post-mortem findings(7). In a report from Fockens et al, 0.4% of 1629 patients who underwent endoscopy for various reasons were found to have a lusorian artery(8). Based on autopsy results and retrospective analysis of patients’ symptoms during life, it has been concluded that about 60-70% of these patients remain asymptomatic(7). Coughing, dysphagia, thoracic pain, syncope and Horner’s syndrome may develop, but usually present in old age(9).
Pill-Induced Oesophagitis: The data on pill-induced oesophagitis are rather limited. A Swedish study found an incidence of 4 cases per 100,000 population per year(10), and Wright found the incidence of drug-induced oesophageal injury to be 3.9/100,000(11). These figures are probably underestimates, as they do not include subclinical or misdiagnosed cases, and cases tend to be reported selectively, following clusters, newly implicated pills or rare complications. The true incidence today is probably much higher given the increased use of prescription medications, the widespread use of endoscopy and an ageing population. All these factors limit our ability to assess the true epidemiology of this iatrogenic disorder.
Pathogenesis
Dysphagia Lusoria: During embryological development, the aortic sac gives rise to six paired aortic arches; with further development the arterial pattern is modified so that the fourth arch persists on both sides while other vessels regress. In a right arch anomaly, the left arch atrophies and disappears whereas the right arch persists. If both arches persist, they form a double arch, a vascular ring encircling the trachea and oesophagus(12).
Pill-induced oesophagitis: Several lines of evidence confirm that oesophageal mucosal injury is caused by prolonged contact with the drug contents(13, 14). On clinical grounds, patients frequently report a sensation of a pill stuck in the oesophagus before the development of symptoms, and symptoms frequently occur after improper pill ingestion. Endoscopically, the evidence includes occasional observation of pill fragments at the site of injury, sharp demarcation of the injury site from normal tissue, and frequent localisation of the injury to areas of oesophageal hypomotility or anatomic narrowing(4, 15). Factors predisposing to drug-induced oesophageal injury can therefore be divided into two main categories: patient or oesophageal factors (16, 17, 18, 19, 20, 21) and drug or pharmaceutical factors, as shown in tables 1 and 2 (13, 14, 22, 23, 24).
Table 1: Patient/oesophageal factors for pill-induced oesophagitis
Old Age
Decreased Salivation
Pill intake in recumbent position
Lack of adequate fluid intake with the drug
Structural abnormalities of the oesophagus
Hypomotility of the oesophagus
Table 2: Drug-related factors for pill-induced oesophagitis
Chemical structure (sustained-release pills, gelatinous surface)
Formulation (capsules carry a higher risk than tablets)
Solubility
Simultaneous administration of multiple medications
It is important to note that most patients who experience pill-induced damage have no antecedent oesophageal disorder, either obstructive or neuromuscular(25). In our case, it was the combination of anatomic narrowing and the caustic effect of the implicated drug that caused the oesophageal injury. Although testosterone pills have not previously been reported to cause pill-induced oesophagitis, six cases of corticosteroid-induced oesophagitis have been described in the literature(26).
Clinical Presentation
Dysphagia Lusoria: As previously mentioned, the disorder remains asymptomatic in the majority of patients. Symptomatic adults usually present with dysphagia for solids (91%) or chest pain (20% or less); less commonly, patients may have cough, thoracic pain or Horner’s syndrome(27, 28). In infants, respiratory symptoms are the predominant mode of presentation, believed to be due to the absence of tracheal rigidity, which allows compression with resulting stridor, wheezing and cyanosis(9). Richter et al reported the average age of presentation to be 48 years(27). Various mechanisms have been proposed to explain this delayed presentation, such as increased rigidity of the oesophagus, rigidity of the vessel wall due to atherosclerosis, aneurysm formation (especially Kommerell’s diverticulum) and elongation of the aorta(9, 29, 30).
Pill-induced oesophagitis: Patients with pill-induced injury usually present with odynophagia, dysphagia and/or retrosternal chest pain(4). Symptoms can appear several days after starting a drug, but frequently occur after the first dose(13). Fever and haematemesis, signifying possible mediastinal extension, can occur without chest pain(32, 33). Pharyngitis due to a pill lodged in the hypopharynx has also been reported(34).
Our case is a typical example: a patient with previously asymptomatic dysphagia lusoria who developed acute dysphagia, odynophagia and retrosternal chest discomfort immediately after initiation of the offending agent, a presentation very typical of pill-induced oesophageal injury.
Diagnostic approach
Dysphagia Lusoria: The best approach to diagnosing an aberrant right subclavian artery presenting with difficulty swallowing is initially a barium oesophagram, followed by a CT or MRI scan(27). Angiography, although considered the gold standard for the diagnosis of vascular abnormalities, has now been largely supplanted by less invasive techniques such as CT or MR angiography. Upper endoscopy may reveal a pulsatile compression of the posterior wall of the oesophagus, as in our case(9, 27). Endoscopic ultrasound, especially with Doppler technology, may help confirm the vascular nature of the abnormality(27). Oesophageal manometry usually shows non-specific findings, although high peristaltic pressures have been reported in the proximal oesophagus above the level of the compression(9, 35).
Pill-induced oesophagitis: Barium studies can be normal, and slowing of the barium column may be the only abnormality seen(31). Double-contrast studies may, however, increase the diagnostic yield(36). Kikendall et al reported that endoscopy revealed evidence of injury in all patients(5). Endoscopy most commonly reveals one or more discrete, well-demarcated ulcers with normal surrounding mucosa; ulcers may range from pin-point size to several centimetres in diameter(5). Biopsies, if performed, help to distinguish the condition from infection and neoplasia.
Our case shows a distinct oblique compression of the posterior wall of the oesophagus on the barium study (fig 1) and classic findings on contrast-enhanced MR/CT, which also excluded other thoracic vascular abnormalities (figs 2-5). Endoscopic images of a shallow ulcer are shown in figs 6 and 7.
Figure 1- Barium esophagram depicting the extrinsic indentation of the esophagus as it crosses the aorta.
Figure 2- CT scan of the thorax demonstrating esophageal compression from a posteriorly placed aorta.
Figure 3- Magnetic resonance image showing esophageal compression with proximally dilated esophagus.
Figure 4- CT image showing the right sided origin of the aortic arch.
Figure 5- Three Dimensional image of the heart and the right sided approach of the arch.
Figure 6- Endoscopic view of esophageal inflammation at the site of compression.
Figure 7- Endoscopic image of the esophageal injury using narrow band imaging.
Treatment
Dysphagia Lusoria: The treatment of patients with dysphagia lusoria (DL) depends primarily on the severity of symptoms. Mild to moderate cases are managed with reassurance and lifestyle and dietary changes such as eating more slowly, chewing well, sipping liquids and weight reduction, as in our case(9, 27). Janssen et al reported a series of six patients, three of whom improved with a proton-pump inhibitor alone or in combination with the prokinetic drug cisapride(9). Severe symptoms and failure of medical therapy may warrant surgical evaluation and treatment; Richter et al reported that 14 of 24 patients with DL underwent surgical repair of the aberrant vessel(27). Bogliolo et al proposed endoscopic dilation as a temporary measure to relieve symptoms in patients who are poor surgical candidates(37).
Pill-induced oesophagitis: Most uncomplicated cases of pill-induced oesophagitis heal spontaneously, with resolution of symptoms in a few days to a few weeks. Withdrawal of the offending drug and avoidance of topically irritating foods such as citrus fruits and alcohol are imperative to aid healing(4, 13). Sucralfate, topical anaesthetics and acid suppression are often used for relief of pain(4, 15). Rarely, in severe cases, parenteral nutrition or endoscopic dilation of chronic strictures may be required(15, 25).
Conclusion
Our case demonstrates a typical presentation of DL as pill-induced oesophagitis that responded to conservative and acid-suppressive therapy. Identifying the risk factors and adequate patient education are the keys to prevention.
Cosmetic outcome following breast-conserving surgery depends on various factors, including location of the tumour, weight of the specimen excised, number of surgical procedures, volume of the breast, length of scar and postoperative adjuvant treatment1. The best method of cosmetic assessment following breast-conserving surgery remains unclear, although various objective and subjective methods in combination are known to give a good assessment of cosmesis2, 3, 4. Photographic assessment has been shown to be as effective as live assessment in the post-surgical setting5. Methods of assessing cosmesis after breast-conserving surgery are varied, and more recently computer software has been used for this purpose.
The aim of this study was to compare three different methods of cosmetic assessment following breast-conserving surgery and to assess the influence of various factors on final cosmetic outcome.
Methods:
One hundred and fifteen patients underwent breast-conserving surgery for carcinoma of the breast by wide local excision and level 2 axillary clearance. Following wide local excision, cavity shavings were taken to ensure adequate local excision. Breast drains were not used, but suction drains were used routinely following axillary clearance. All patients received adjuvant breast radiotherapy (46 Gy in 23 fractions, with a cavity boost of 12 Gy in 4 fractions) administered over a period of 6 weeks.
Figure-1: Measurement of Breast Retraction Assessment6 (reprinted with permission from Elsevier, ref 6 (page 670), copyright 1999)
Digital photographs were taken at one year in three views: frontal with the arm by the side, and frontal and oblique with the arm abducted to 90 degrees. The photographs were used for subjective and objective assessment of cosmesis. The objective assessment was carried out using Breast Retraction Assessment (BRA) and Nipple Deviation (ND). BRA was calculated as indicated in figure 1 (ref 6). ND was calculated as the percentage difference in the distance from the suprasternal notch to the nipple on the normal side compared with the operated side. BRA and ND were then categorised into three groups; BRA: excellent to good <3.1 cm, fair 3.1-6.5 cm, poor >6.5 cm; ND: difference of <5% excellent to good, 5-10% fair and >10% poor. Subjective assessment was carried out by a panel consisting of a Consultant Breast Surgeon, Research Fellow, Secretary, Breast Care Nurse and Nurse Practitioner, each scoring independently. Patients were categorised using the method described by Harris et al7: excellent (no visible difference between the two breasts; score 9-10), good (slight difference; score 7-8), fair (obvious difference but no major distortion; score 4-6) and poor (major distortion; score <4).
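As an illustration of these scoring rules, the following minimal Python sketch (our own; the measurement values, function names and variables are hypothetical, not from the study) maps ND and BRA values to the three categories defined above:

def nd_percent(notch_to_nipple_normal_cm, notch_to_nipple_operated_cm):
    """Nipple Deviation: percentage difference in the suprasternal
    notch-to-nipple distance, operated side vs normal side."""
    diff = abs(notch_to_nipple_operated_cm - notch_to_nipple_normal_cm)
    return 100.0 * diff / notch_to_nipple_normal_cm

def categorise_bra(bra_cm):
    # Thresholds as stated in the Methods: <3.1 cm, 3.1-6.5 cm, >6.5 cm
    if bra_cm < 3.1:
        return "excellent to good"
    elif bra_cm <= 6.5:
        return "fair"
    return "poor"

def categorise_nd(nd_pct):
    # Thresholds as stated in the Methods: <5%, 5-10%, >10%
    if nd_pct < 5:
        return "excellent to good"
    elif nd_pct <= 10:
        return "fair"
    return "poor"

# Hypothetical example: 21 cm on the normal side, 23 cm on the operated side
nd = nd_percent(21.0, 23.0)          # ~9.5%
print(categorise_nd(nd))             # "fair"
print(categorise_bra(2.8))           # "excellent to good"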
Figure- 2: Measurement of breast volume (Sloane method). Formula for calculation of breast volume: V = (1/3)πr²h (reprinted with kind permission from the Sloane project)
The volume of breast tissue excised was estimated from the length (L), width (W) and height (H) of the excised tissue specimen and of the cavity shave, measured by the pathologist. The specimen volume was calculated using the formula for a prolate ellipsoid (V = 0.52 × L × W × H) and added to the volume of the cavity shave, calculated using the formula V = 0.79 × L × W × H. The total breast volume was estimated from the mammogram by applying the formula V = (1/3)πr²h, as shown in figure 2. From these measurements the percentage breast volume excised was calculated and compared with cosmetic outcome.
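For clarity, the volume arithmetic above can be expressed as a short Python sketch (our own illustration; the measurements are hypothetical):

import math

def excised_volume_cm3(spec_lwh_cm, shave_lwh_cm):
    """Specimen as a prolate ellipsoid (0.52*L*W*H) plus the cavity
    shave (0.79*L*W*H), per the formulas given in the Methods."""
    l, w, h = spec_lwh_cm
    sl, sw, sh = shave_lwh_cm
    return 0.52 * l * w * h + 0.79 * sl * sw * sh

def breast_volume_cm3(radius_cm, height_cm):
    """Total breast volume from the mammogram: (1/3) * pi * r^2 * h."""
    return (1.0 / 3.0) * math.pi * radius_cm ** 2 * height_cm

# Hypothetical measurements (cm): a 5 x 4 x 3 specimen, a 4 x 3 x 1 shave,
# and a breast of radius 8 cm and height 6 cm on the mammogram.
excised = excised_volume_cm3((5, 4, 3), (4, 3, 1))       # ~40.7 cm^3
pct_excised = 100 * excised / breast_volume_cm3(8, 6)    # ~10%
print(f"{pct_excised:.1f}% of breast volume excised")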
Statistics:
Multirater kappa statistics8 were used to assess inter-observer agreement between the five members of the panel and to test agreement between the three different methods of assessing cosmesis. The average value given by the panel was used, and the good and excellent categories were combined in order to compare the three methods of cosmetic assessment. A kappa statistic of 0.20 or less was considered to demonstrate poor agreement, 0.21 to 0.40 fair agreement, 0.41 to 0.60 moderate agreement, 0.61 to 0.80 good agreement and 0.81 to 1.00 very good agreement9.
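One possible way to reproduce such a multirater kappa in Python is sketched below using Fleiss' kappa from statsmodels; note that this is our own illustration with hypothetical ratings, and the paper's multirater method8 may differ in detail:

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: 6 patients (rows) x 5 panel members (columns);
# categories 0 = poor, 1 = fair, 2 = good/excellent.
ratings = np.array([
    [2, 2, 1, 2, 2],
    [1, 1, 1, 0, 1],
    [2, 2, 2, 2, 2],
    [0, 1, 0, 0, 1],
    [1, 2, 1, 1, 1],
    [2, 1, 2, 2, 2],
])

table, _ = aggregate_raters(ratings)   # counts per category per patient
kappa = fleiss_kappa(table)

def agreement_band(k):
    """Banding used in the paper (ref 9)."""
    if k <= 0.20: return "poor"
    if k <= 0.40: return "fair"
    if k <= 0.60: return "moderate"
    if k <= 0.80: return "good"
    return "very good"

print(f"kappa = {kappa:.2f} ({agreement_band(kappa)} agreement)")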
The effects of the percentage volume of breast tissue excised and of tumour size on the three methods of cosmetic assessment were examined using, as appropriate, a Jonckheere-Terpstra test for trend, a Kruskal-Wallis test or a Mann-Whitney U test. The effects of the number of breast operations performed and the location of the tumour were assessed using the Chi-square test or Fisher’s exact test as appropriate.
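As an illustration, most of the named tests are available in scipy; the sketch below uses hypothetical data, and scipy has no built-in Jonckheere-Terpstra test, so that step is omitted:

from scipy.stats import kruskal, mannwhitneyu, chi2_contingency, fisher_exact

# Percentage volume excised per cosmetic category (hypothetical values)
poor = [13.8, 12.0, 15.1]
fair = [8.4, 9.0, 7.2, 10.4]
good = [5.8, 4.1, 6.3, 3.9, 7.0]

h, p_kw = kruskal(poor, fair, good)    # three-group comparison
u, p_mw = mannwhitneyu(fair, good)     # two-group comparison

# Location (outer vs inner) vs outcome: hypothetical 2 x 3 table of counts
counts = [[8, 22, 47],
          [2, 8, 8]]
chi2, p_chi, dof, _ = chi2_contingency(counts)

# Fisher's exact test on a hypothetical 2 x 2 table
# (e.g. one vs two operations, poor vs not poor)
odds, p_f = fisher_exact([[9, 81], [1, 15]])
print(p_kw, p_mw, p_chi, p_f)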
Results:
Of the 115 patients assessed by panel assessment, 64 (56%) scored good to excellent, 39 (34%) fair and 12 (10%) poor. ND scored 50 (43%) as good to excellent, 32 (28%) as fair and 33 (29%) as poor. Using BRA, the corresponding scores were 76 (66%), 38 (33%) and 1 (1%). These results are shown graphically in figure 3.
Figure- 3: Number of patients classified into each of the three categories (poor, fair and good/excellent) for the three methods BRA, nipple deviation and panel assessment. BRA= breast retraction assessment; Panel= assessment by different panel members; ND= nipple deviation
Taking the mean scores for these three methods of assessment and dichotomising the results into two categories (good to excellent and poor to fair), 52% of patients in this study had a good to excellent cosmetic result and 48% a fair to poor result. The kappa statistic calculated across the 115 patients for the three methods of assessment was –0.23 (95% CI: –0.35 to –0.11), which falls within the poor agreement category.
Figure- 4: Comparison of panel assessment by different panel members. Pa, Ph, Pg, Pk and Pd= Codes for the different panel members
Examining the panel assessment using kappa statistics for the 115 patients, there was moderate agreement between the panel members (kappa statistic 0.42; 95% confidence interval 0.37 to 0.47). This suggests a moderate likelihood that the panel members categorise each patient in the same way. Plotting the panel assessments graphically shows that excellent is used least by all members and fair most frequently (figure 4).
Factors affecting cosmesis:
1) Percentage breast volume excised
Figure -5: Effect of percentage breast volume excised on cosmetic outcome using Panel assessment, BRA and ND
For panel assessment it appears that removal of a larger percentage volume gives a poor cosmetic result and a smaller percentage volume an excellent/good result (figure 5), as would be expected clinically. This is supported by a Jonckheere-Terpstra test for trend (p=0.01). Using ND, the median percentage volumes did not appear to differ across the groups (χ2=1.05, p=0.59, Kruskal-Wallis test). For BRA, only one patient was classified as poor, and no difference was seen between those with fair and good/excellent results (U=477, p=0.34). The median volume excised for each cosmetic outcome using the three methods is shown in table 1.
Table-1: Median volumes excised for the three methods of assessment.

               | Panel assessment | BRA                 | Nipple deviation
Poor           | 157.56           | (only 1 poor value) | 100.61
Fair           | 88.58            | 93.11               | 55.96
Good/Excellent | 68.33            | 76.55               | 81.33

BRA= breast retraction assessment
The percentage breast volume excised was then compared with cosmetic outcome using the three methods of assessment. As shown in table 2, 45-65% of patients with <10% estimated breast volume excised had a good to excellent cosmetic result, compared with 35-50% if >10% of the breast volume was excised.
2) Tumour location:
Tumour location was divided into inner or outer quadrants of the breast. The distribution of tumours in the breast and the cosmetic outcome with each of the three methods of assessment are shown in table 3. The location of the tumour within the breast was not significantly associated with cosmetic outcome (χ2=1.86, p=0.39 for panel assessment; p=0.23, Fisher’s exact test for BRA; χ2=0.21, p=0.90 for ND).
Table-2: Estimated percentage breast volume excised and cosmetic outcome

                              | <10% breast volume excised | >10% breast volume excised
Panel Assessment
  Good to excellent (%)       | 32 (65)                    | 7 (35)
  Fair (%)                    | 15 (31)                    | 6 (30)
  Poor (%)                    | 2 (4)                      | 7 (35)
Breast Retraction Assessment
  Good to excellent (%)       | 32 (65)                    | 10 (50)
  Fair (%)                    | 16 (33)                    | 10 (50)
  Poor (%)                    | 1 (2)                      | 0
Nipple Deviation
  Good to excellent (%)       | 22 (45)                    | 8 (40)
  Fair (%)                    | 15 (31)                    | 4 (20)
  Poor (%)                    | 12 (24)                    | 8 (40)
3) Number of breast operations:
The influence of the number of operations (1 vs 2) was examined for each of the three methods of assessment. Using BRA and panel assessment there was no significant difference in cosmetic outcome between patients who underwent one or two operations (p=0.70 for panel assessment; p=0.99, Fisher’s exact test for BRA). For ND there appeared to be a larger proportion of patients in the poor group among those with two operations, although this was not statistically significant (p=0.30, Fisher’s exact test). This is illustrated in table 3.
Table-3: Factors affecting cosmesis

                                           | Panel            | BRA             | ND
Percentage volume excised (median (IQR))
  Poor                                     | 13.8 (11.0,16.5) | -               | 8.5 (5.1,11.4)
  Fair                                     | 8.4 (4.4,10.4)   | 8.0 (4.6,11.6)  | 5.8 (3.9,9.4)
  Good/Excellent                           | 5.8 (3.9,8.0)    | 6.9 (4.3,10.1)  | 7.2 (4.4,11.0)
Location (outer (n), inner (n))
  Poor                                     | 8, 2             | 0, 1            | 9, 1
  Fair                                     | 22, 8            | 26, 5           | 23, 6
  Good/Excellent                           | 47, 8            | 51, 12          | 33, 8
No. of operations (one (n), two (n))
  Poor                                     | 9, 1             | 1, 0            | 20, 5
  Fair                                     | 24, 6            | 26, 5           | 27, 2
  Good/Excellent                           | 48, 8            | 54, 10          | 34, 8
Tumour size (mm) (median (IQR))
  Poor                                     | 12 (9, 15)       | -               | 12 (10, 15)
  Fair                                     | 11 (9, 19)       | 11 (8, 15)      | 12 (8, 16)
  Good/Excellent                           | 12 (7, 15)       | 12 (7, 15)      | 9 (6, 14)

Panel= panel assessment; BRA= breast retraction assessment; ND= nipple deviation; IQR= inter-quartile range
4) Tumour size:
Table 3 shows the median tumour size and interquartile range for the three categories (good/excellent, fair and poor). There was no significant difference in tumour size between these categories using panel assessment (Jonckheere-Terpstra, p=0.31) or BRA (U=873, p=0.55). However, using ND there was evidence to suggest that larger tumour size was associated with a poorer outcome (Jonckheere-Terpstra, p=0.04). Thus, tumour size had a significant influence on cosmetic outcome only when ND was used as the method of assessment.
Discussion:
Cosmetic outcome following breast-conserving surgery is assessed using a combination of subjective and objective methods. The subjective method uses a panel of members from different backgrounds to assess overall cosmesis; however, Pezner et al10 showed a relatively low level of agreement between observers when a four-point scale was used. The objective methods, which mainly compare the position of the nipple, are easy to reproduce but do not take skin changes into account and give a poor assessment of cosmesis for lower quadrant tumours.
In this study the cosmetic outcome was assessed in 115 patients one year post-operatively. The mean cosmetic result using the three different methods of assessment was good to excellent in 55% of the patients, which compares favourably with other studies reported in the literature2, 4. Looking at inter-observer variation for the panel assessment, moderate agreement was found between different panel members. This compares favourably with an earlier study that looked at cosmetic outcome in the EORTC trial 22881/10882.6 However, when the three methods of cosmetic assessment were compared with each other using the kappa statistic, there was poor concordance. Although some agreement was noted, this was likely to be due to chance as the kappa statistic was low. This finding is difficult to explain, as other authors1, 6 have reported moderate to good agreement between subjective and objective methods. One explanation for the lack of agreement is that each method assesses a different aspect of cosmesis.
The two objective methods of cosmetic assessment (BRA and ND) that are used to assess upward retraction of nipple have been found to be a very good determinant of cosmetic outcome and are easy to reproduce according to Fujishiro et al11. Furthermore, evaluation of nipple position has also been shown to be moderately representative of overall cosmetic result6. BRA is a two dimensional measurement of nipple position and some cosmetic factors such as volume, shape or skin changes cannot be accurately assessed11. This is probably the reason why BRA shows a better cosmetic outcome when compared with subjective assessment by panel members. In this study only one (1%) patient was deemed to have a poor cosmetic outcome using BRA compared with 12 (10%) using panel assessment.
A criticism of the current study is that patients’ perceptions of their own cosmetic outcome were not assessed. Previous studies have shown a significant correlation between patient satisfaction after breast-conserving surgery and self-assessment of cosmesis12, 13. This study shows that there is a need for a reproducible method of cosmetic assessment that takes into account the limitations of the methods currently used. More recently, computer software such as BCCT.core and the Breast Analysing Tool has been developed, and early results with these packages are promising14, 15.

There are various factors that are known to affect cosmesis following breast-conserving surgery. As expected, a larger percentage volume of excised breast tissue was associated with a poorer cosmetic result. This was particularly evident on panel assessment; the relationship was less clear with BRA and ND. The effect of the percentage volume of breast tissue excised on outcome is consistent with a recent report that showed higher patient satisfaction when the estimated percentage breast volume excised was <10%16.

Cosmetic outcome by tumour location varies with the method of assessment used. BRA is adversely affected by tumours in the upper and outer quadrants of the breast, suggesting that surgery causes larger nipple deviation in these quadrants, while panel assessment gives poor scores for tumours located in the inferior quadrant2, 11. In this study only 19% of patients had tumours located in the inner quadrant, and this small number may explain why no significant difference in cosmetic outcome was found. Neither tumour location nor the number of operations performed appeared to affect the cosmetic outcome in this study. The volume of breast tissue excised depends on tumour size; since the majority of tumours in this study were small, tumour size did not affect cosmetic outcome except when nipple deviation was used. This once again indicates that these three methods of assessment may be looking at different aspects of cosmesis.
In conclusion, cosmetic outcome following breast-conserving surgery is an important, measurable end point, but the best method of assessing cosmesis has yet to be devised17. Although the objective methods are easier to apply and reproduce, they do not give a good assessment of the global cosmetic result. Panel assessment, however, does appear to provide concordant results between different observers and may be a useful, simple measure of cosmetic assessment following breast-conserving surgery.
Bisphosphonates, which have been on the market for roughly a decade, have raised safety concerns in the past. Several case series and multiple individual case reports suggest that some subtrochanteric and femoral shaft fractures may occur in patients who have been treated with long-term bisphosphonates. Several unique clinical and radiographic features are emerging. Recent media spotlight in the United States (US), implying that long-term use of alendronate could cause spontaneous femur fractures in some women, has reignited the debate about the safety of bisphosphonates. The question posed: is the risk of bisphosphonate-associated fractures so great that treatment should be stopped?
Postmenopausal women with osteoporosis are commonly treated with the bisphosphonate class of medications, among the most frequently prescribed medications in the US. While alendronate therapy has been shown to decrease the risk of vertebral and femoral neck fractures in postmenopausal osteoporotic patients, recent reports have associated long-term alendronate therapy with low-energy subtrochanteric and diaphyseal femoral fractures in a number of patients. In the past four years, reports have been published implying that long-term bisphosphonate therapy could be linked to atraumatic femoral diaphyseal fractures.1, 2 According to two studies reported at the American Academy of Orthopaedic Surgeons 2010 Annual Meeting, an unusual type of bone fracture has been seen in women who have taken bisphosphonates for osteopenia and osteoporosis for more than four years.3, 4 The first report was published in 2005: Odvina et al5 described nine patients who sustained atypical fractures, some with delayed healing, while receiving alendronate therapy. These authors raised the concern that long-term bisphosphonate therapy may lead to over-suppression of bone remodelling, an impaired ability to repair skeletal microfractures, and increased skeletal fragility. There have been other reports of "peculiar" fractures - i.e. low-energy femur fractures that are typically transverse or slightly oblique, diaphyseal or subtrochanteric, with thickened cortices and a unicortical beak - in patients on long-term bisphosphonate treatment.1-4, 6
In a small prospective study, Lane et al3 obtained bone biopsies from the lateral femurs of 21 postmenopausal women with femoral fractures. Twelve of the women had been on bisphosphonate therapy for an average of 8.5 years, and nine had no history of bisphosphonate use. The heterogeneity of the mineral/matrix ratio was significantly reduced by 28% in the bisphosphonate group, and the crystallinity of the bone was significantly reduced by 33% (p < 0.05). The authors concluded that this suggested suppression of bone turnover, with a resulting loss of heterogeneity of tissue properties, which may contribute to the risk of the atypical fractures now being seen. It is believed that long-term alendronate administration may inhibit the normal repair of microdamage arising from severe suppression of bone turnover (SSBT), which in turn results in accumulation of microdamage. This process would lead to brittle bone and the occurrence of unexpected stress fractures, characteristically at the subtrochanteric femur. The typical presentation consists of prodromal pain in the affected leg and/or discrete cortical thickening on the lateral side of the femur on conventional radiographs, or a spontaneous transverse subtrochanteric femur fracture with typical features. The morbidity of atypical femoral fractures, particularly when bilateral, is high. Surgical intervention is generally required and healing may not be achieved for several years. Despite the lack of conclusive evidence of a causal relationship with bisphosphonate therapy, the current consensus is that treatment should be discontinued in patients who develop these fractures. In view of the high frequency of bilateral involvement, imaging of the contralateral femoral shaft with X-rays, MRI or an isotope bone scan should be performed; MRI and bone scanning have greater sensitivity than radiography for an incipient stress fracture. If lateral cortical thickening and/or an incipient stress fracture is seen, prophylactic surgical fixation should be considered. Suppressed bone formation in these patients provides a possible rationale for the use of anabolic skeletal agents, such as parathyroid hormone peptides, but at present the efficacy of this approach remains to be established. Parathyroid hormone has not only activated bone-formation markers in trials in humans but has also enhanced the healing of fractures in animal studies.
The question of whether these fractures are causally linked to bisphosphonate therapy is widely debated but as yet unresolved. Consequences of long-term suppression of bone turnover include increased mineralisation of bone, alterations in the composition of its mineral/matrix composite and increased microdamage, all of which may reduce bone strength. While these lend biological plausibility to a causal association, they do not constitute direct evidence. The bilateral fractures seen in many patients corroborate the suspicion that patients with bisphosphonate-associated stress fractures carry some other risk factor in addition to taking the drug; microfractures, inadequate mineralisation and outdated collagen are among the candidate causes. Until further studies provide definitive evidence of bisphosphonate-associated fractures, it is premature to attribute atypical fractures to over-suppression of bone turnover alone while disregarding secondary and patient-related factors. Many experts believe that prolonged suppression of bone remodelling with alendronate may be associated with a new form of insufficiency fracture of the femur. Studies have not shown whether the entire class of medications produces a similar result, but patients who have been treated with any bisphosphonate for an extended period should be considered at risk.
A wealth of information from well-designed clinical trials clearly shows that, as a class, bisphosphonates are highly effective at limiting the loss of bone mass, the deterioration of bone microarchitecture and the increased fracture risk that occur with ageing. The benefit/risk ratio of bisphosphonate therapy in patients at high risk of fracture remains overwhelmingly positive because of the very low incidence of atypical femoral fractures. Current estimates suggest that treating 4000 women with alendronate over three years prevents 200 clinical fractures and causes one femur fracture over the same period.7 A study by Schilcher et al8 found that the incidence density of a stress fracture in a patient on a bisphosphonate was 1/1000 per year (95% CI: 0.3-2), which is acceptable considering that bisphosphonate treatment is likely to reduce the incidence density of any fracture by 15/1000.9 Nevertheless, limiting treatment duration to five years in the first instance, with evaluation of the need to continue therapy thereafter, may be appropriate in clinical practice. The Fracture Intervention Trial Long-term Extension (FLEX), in which postmenopausal women who had received alendronate for five years were randomised either to continue alendronate for five additional years or to switch to placebo, provided clinical evidence that the effect of bisphosphonate therapy is maintained after discontinuation.7, 10 It has therefore been suggested that women who have been on bisphosphonates for five years should take a drug holiday. Patients in whom bisphosphonate therapy is discontinued should typically be followed up with bone mineral density measurements at 1- to 2-year intervals, with some experts advocating periodic measurement of biochemical markers of bone turnover to detect loss of the antiresorptive effect. Additional research is necessary to determine the exact relationship between the use of bisphosphonates and spontaneous or low-energy trauma fractures.
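To put the quoted figures in perspective, a minimal worked calculation (our own illustration, using only the numbers cited above) of the number needed to treat (NNT) and number needed to harm (NNH):

# Benefit/risk arithmetic from the figures quoted above (ref 7):
# treating 4000 women for three years prevents ~200 clinical fractures
# and causes ~1 atypical femur fracture.
treated = 4000
fractures_prevented = 200
fractures_caused = 1

nnt = treated / fractures_prevented   # number needed to treat = 20
nnh = treated / fractures_caused      # number needed to harm = 4000
print(f"NNT over 3 years: {nnt:.0f}")
print(f"NNH over 3 years: {nnh:.0f}")
print(f"Fractures prevented per fracture caused: {fractures_prevented / fractures_caused:.0f}")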
Professor Elisabeth Paice is currently on secondment to NHS London having been appointed to the new post of Acting Director of Medical and Dental Education from her role as Dean Director at London Deanery. The new role will ensure that the right number of doctors and dentists have the right training to deliver the service ambitions outlined in Healthcare for London. Elisabeth will be leading on the Medical and Dental Education Commissioning System (MDECS). This is the name of the programme of work that will manage the changes to postgraduate medical and dental training.
She was born in Washington DC, brought up in Canada, and studied medicine first at Trinity College Dublin and later at Westminster Medical School. She originated the 'Hospital at Night' concept, developed the 'Point of View Surveys', and chaired PMETB working parties on Generic Standards and the National Trainee Survey. She has published on topics including doctors in difficulty, workplace bullying and women in medicine. She was Chair of COPMeD, the Conference of Postgraduate Medical Deans, from July 2006 to July 2008.
How long have you been working in your speciality?
I have been a full-time postgraduate dean since 1995. Before that I was a consultant rheumatologist for 13 years.
Which aspect of your work do you find most satisfying?
I get great satisfaction out of developing and implementing new ideas, especially when they work well enough to be taken up by others. I think most doctors have a creative streak and sometimes bureaucracy can damp this down. One of the reasons why medical education and training is so enjoyable is that it has to keep changing because of changes in the way the service is developing. There are standards to be met, of course, and regulators to satisfy, but within those constraints there is plenty of room for innovation. The better the quality of education and training, the better and safer the care of patients.
What achievements are you most proud of in your medical career?
As Dean Director of London, I have been very proud to lead postgraduate medical and dental education in one of the world’s great cities, with its five world-renowned medical schools, numerous centres of clinical excellence, and over 10,000 trainees. In order to understand trainees’ views, I introduced a regular survey through which they could voice their views about the quality of training they were receiving. I was very pleased when this formed the basis of the very successful National Trainee Doctor Survey, now embarking on its fourth iteration. This survey has enabled postgraduate deans across the UK to identify departments where training is not meeting the minimum standards for training and to take appropriate action.
Other achievements of which I am proud include the development of a multiprofessional team-based approach to out of hours services, known as the Hospital at Night initiative, which has improved patient safety while providing a solution for reducing the hours of junior doctors. Most recently I am delighted with the success of London’s Simulation and Technology-enhanced Learning Initiative (STeLI) which recently won the prestigious Health Service Journal Award for Patient Safety.
Which part of your job do you enjoy the least?
I least enjoy dealing with performance issues, whether internal to my staff or among trainees or their trainers.
What are your views about the current status of medical training in your country and what do you think needs to change?
Medical education is recognized in the UK as being a vital factor in providing the high quality doctors necessary for a high quality health service. It needs to be better resourced, and in particular every doctor with responsibility for educational supervision needs to have the training, the time, and the tools to do a good job. The way in which training has traditionally taken place, known as the ‘apprenticeship model’, is no longer suitable because of restrictions on the hours of work. I am all in favour of these restrictions, because long hours have a negative impact on learning and pose a risk to the health and safety of both doctors and patients. But we need radical change in the way we depend on doctors in training to provide out of hours cover and we need to find robust ways to ensure they gain the practical experience they need.
How would you encourage more medical students into entering your speciality?
I would strongly encourage any medical student to consider taking an interest in medical education from the start. Whatever the field of medicine that they enter, there will inevitably be an expectation that they will teach the next generation of doctors and of other healthcare professionals. Teaching is increasingly being recognized as one of the duties of a doctor, and like anything else, the more effort you put in, the more rewarding the outcomes.
What qualities do you think a good trainee should possess?
Trainees need to have a solid grounding in the basic sciences, because it is the foundation on which their postgraduate training will build. They need to be both conscientious and curious, doing what is required of them, but also going the extra mile in the search for knowledge. They should be motivated by the desire to make a positive difference to the lives of others, because I believe that is the only motivation that stands the test of time.
What is the most important advice you could offer to a new trainee?
Read the curriculum, establish what is expected of you and what you can expect from your seniors and your team, and engage with the educational programme.
What qualities do you think a good trainer should possess?
Kindness, honesty, expertise - and a passion for developing these qualities in their juniors.
Do you think doctors are over-regulated compared with other professions?
No. Medicine is a profession in which we can potentially harm others, so regulation is a necessity.
Is there any aspect of current health policies in your country that is de-professionalising doctors? If yes, what should be done to counter this trend?
The responsibility for the professionalism of a doctor lies with the doctor. There are no policies in the UK that de-professionalise doctors.
Which scientific paper/publication has influenced you the most?
I have been heavily influenced by the body of work by Charles Czeisler in the USA and Philippa Gander in New Zealand about the impact of long hours and sleep deprivation on health, safety, errors and retention of learning of doctors in training.
What single area of medical research in your speciality should be given priority?
Simulation technology.
What is the most challenging area in your speciality that needs further development?
Fitting adequate training into a 48-hour week without lengthening the duration of training.
Which changes would substantially improve the quality of healthcare in your country?
Improving the training of general practitioners
Do you think doctors can make a valuable contribution to healthcare management? If so how?
All doctors need to learn to look after the system of care as well as the patient in front of them. Medical leadership is crucial to modernizing services. During training all doctors should be involved in quality improvement initiatives and all should learn how to champion change effectively.
How has the political environment affected your work?
The most recent impact has come from the national policy to introduce a separation between the commissioning of education and its provision. This has meant a reorganization of the way we work, with much of the work we did being commissioned from lead providers. While change is always disconcerting, there are real benefits to be realized from this one, in particular a better alignment between service and education planning.
What are your interests outside of work?
Looking after our four delightful grandchildren
If you were not a doctor, what would you do?
When I was at school I planned to write plays, but a medical career has sated my appetite for drama.
He'd try to sit, couldn't hold on for long,
Fidgety, restless, frustration would only prolong
Tried hard to listen to parents and teacher,
Distracted, voices sounding like a background clutter
Kept working on sitting listening and learning
Realized wasn't at par with kids and his sibling
This sentence would redundantly echo in his head
"Sit, listen, learn" you dumb head!!!
"How come life can't be better than what I feel?"
Why is it so hard for me to deal
My head hurts after constant listening,
Nothing I do is gratifying
They say, am not in same learning standard curve as other kids
My parents are worried for me, not understanding my needs
Have tried all avenues, anger, love, comfort, compassion,
Yet everyday is a challenge for them to find a solution
They interpreted his "not sitting still as restlessness",
Not listening and disruptive behaviour as impulsiveness
His attention level considered as poor learning skills
parents embarrassed, trying to overcome his hills
"Trust me”, He'd say, “you don’t understand, I'm trying my best"
Parents instead kept echoing sit, listen and learn, and accept it as a test
All this felt repetitive and redundant in his head,
Until someone said "maybe something is wrong with his brain instead"
Suggested see a doctor who might help clear the clutter away
Who observed his behaviour without decision to change him right away,
That's when he told the parents "Your child has had attention deficit disorder"
They felt was a mental taboo, and asked not to speak about it louder
The doctor insisted on strict compliance and periodic follow-up
Meds, mental stimulation exercises worked, felt no more like empty cup
Before he knew, he was sitting longer, nothing felt like clutter
Realized the deficit had prevented him from thinking better
Parents and doctors worked together, we salute them for the joint effort,
helped him evolve into the person altogether different
He listens to his inner and external suggestions alone and in group discussions,
Has learned realities of life, applying them in every day decisions
Sits down for hours working on his research projects
Sit, listen, learn, now all sound real, not mystical acts
The fundamental responsibility of an anesthesiologist is to maintain adequate gas exchange through a patent airway. Failure to maintain a patent airway for more than a few minutes results in brain damage or death.1 Anaesthesia in a patient with a difficult airway can lead to both direct airway trauma and morbidity from hypoxia and hypercarbia. Direct airway trauma occurs because the management of the difficult airway often involves the application of more physical force to the patient’s airway than is normally used. Much of the morbidity specifically attributable to managing a difficult airway comes from an interruption of gas exchange (hypoxia and hypercapnia), which may then cause brain damage and cardiovascular activation or depression.2
Though endotracheal intubation is a routine procedure for all anesthesiologists, occasions may arise when even an experienced anesthesiologist has great difficulty achieving successful control of the airway. As difficult intubation occurs infrequently and is not easy to define, research has been directed at predicting difficult laryngoscopy, i.e. when it is not possible to visualize any portion of the vocal cords after multiple attempts at conventional laryngoscopy. It is argued that if difficult laryngoscopy has been predicted and intubation is essential, skilled assistance and specialist equipment should be provided. Although the incidence of difficult or failed tracheal intubation is comparatively low, unexpected difficulties and poorly managed situations may result in a life-threatening condition or even death.3
Difficulty in intubation is usually associated with difficulty in exposing the glottis by direct laryngoscopy. This involves a series of manoeuvres, including extending the head, opening the mouth, displacing and compressing the tongue into the submandibular space and lifting the mandible forward. The ease or difficulty in performing each of these manoeuvres can be assessed by one or more parameters4.
Extension of the head at the atlanto-occipital joint can be assessed by simply looking at the movements of the head, measuring the sternomental distance, or by using devices to measure the angle5. Mouth opening can be assessed by measuring the distance between upper and lower incisors with the mouth fully open. The ease of lifting the mandible can be assessed by comparing the relative position of the lower incisors in comparison with the upper incisors after forward protrusion of the mandible6. The measurement of the mento-hyoid distance and thyromental distance provide a rough estimate of the submandibular space7. The ability of the patient to move the lower incisor in front of the upper incisor tells us about jaw movement. The classification provided by Mallampati et al8 and later modified by Samsoon and Young9 helps to assess the size of tongue relative to the oropharynx. Abnormalities in one or more of these parameters may help predict difficulty in direct laryngoscopy1.
Initial studies attempted to compare individual parameters to predict difficult intubation with mixed results8,9. Later studies have attempted to create a scoring system3,10 or a complex mathematical model11,12. This study is an attempt to verify which of these factors are significantly associated with difficult exposure of glottis and to rank them according to the strength of association.
Materials & methods
The study was conducted after obtaining institutional review board approval. Six hundred ASA I & II adult patients, scheduled for various elective procedures under general anesthesia, were included in the study after obtaining informed consent. Patients with gross abnormalities of the airway were excluded from the study. All patients were assessed the evening before surgery by a single observer. The details of airway assessment are given in Table I.
Table I: Method of assessment of various airway parameters (predictors)

Modified Mallampati scoring
Class I: faucial pillars, soft palate and uvula visible
Class II: soft palate and base of uvula seen
Class III: only soft palate visible
Class IV: soft palate not seen
Classes I & II: easy intubation; Classes III & IV: difficult intubation

Obesity
Obese: BMI ≥ 25; non-obese: BMI < 25

Inter-incisor gap
Distance between the incisors with the mouth fully open (cm)

Thyromental distance
Distance between the tip of the thyroid cartilage and the tip of the chin, with the head fully extended (cm)

Degree of head extension
Grade I: ≥ 90°; Grade II: 80°–90°; Grade III: < 80°

Grading of prognathism
Class A: lower incisors can be protruded anterior to the upper incisors
Class B: lower incisors can be brought edge to edge with the upper incisors but not anterior to them
Class C: lower incisors cannot be brought edge to edge with the upper incisors
In addition, the patients were examined for the following:
High-arched palate
Protruding maxillary incisors (buck teeth)
Wide and short neck
Direct laryngoscopy with a Macintosh blade was performed by an anaesthetist who was blinded to the preoperative assessment.
Glottic exposure was graded as per Cormack-Lehane classification13 (Fig 1).
Figure 1: Cormack-Lehane grading of glottic exposure on direct laryngoscopy
Grade 1: most of the glottis visible; Grade 2: only the posterior extremity of the glottis and the epiglottis visible; Grade 3: no part of the glottis visible, only the epiglottis seen; Grade 4: not even the epiglottis seen. Grades 1 and 2 were considered as ‘easy’ and grades 3 and 4 as ‘difficult’.
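Expressed programmatically, the dichotomisations of Table I and the easy/difficult split of the Cormack-Lehane grades can be written out as a short data structure. The Python sketch below is purely illustrative and not part of the study; the class and field names are hypothetical, but the thresholds follow the text above.

```python
# Illustrative only: the study's dichotomised predictors (Table I) and
# outcome (Figure 1) written as code. Names are hypothetical; thresholds
# follow the text.
from dataclasses import dataclass

@dataclass
class AirwayAssessment:
    mallampati_class: int           # modified Mallampati, 1-4
    bmi: float                      # body mass index, kg/m^2
    inter_incisor_gap_cm: float     # incisor gap, mouth fully open
    thyromental_distance_cm: float  # thyroid cartilage to chin, head extended
    head_extension_grade: int       # 1 (>= 90 deg), 2 (80-90 deg), 3 (< 80 deg)
    prognathism_class: str          # 'A', 'B' or 'C' (Table I)

    def predicted_difficult(self) -> dict:
        """Dichotomise each predictor; True flags predicted difficulty."""
        return {
            "mallampati": self.mallampati_class >= 3,          # Class III/IV
            "obesity": self.bmi >= 25,                         # study definition
            "inter_incisor_gap": self.inter_incisor_gap_cm < 4,
            "thyromental_distance": self.thyromental_distance_cm < 6,
            "head_extension": self.head_extension_grade >= 2,  # Grades II/III
            "prognathism": self.prognathism_class == "C",
        }

def laryngoscopy_difficult(cormack_lehane_grade: int) -> bool:
    """Grades 1 and 2 were considered 'easy', grades 3 and 4 'difficult'."""
    return cormack_lehane_grade >= 3
```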
Results
Glottic exposure on direct laryngoscopy was difficult in 20 (3.3%) patients.
The frequency of patients in the various categories of ‘predictor’ variables is given in Table II.
Table II: The frequency analysis of predictor parameters

Modified Mallampati scoring: Class 1 & 2, 96%; Class 3 & 4, 4%
Obesity: obese (BMI ≥ 25), 28.7%; non-obese (BMI < 25), 71.3%
Inter-incisor gap: Class I (> 4 cm), 93.5%; Class II (< 4 cm), 6.5%
Thyromental distance: Class I (≥ 6 cm), 94.6%; Class II (< 6 cm), 5.4%
Head and neck movements: easy (Grade I, > 90°), 84%; difficult (Grades II & III, ≤ 90°), 16%
Grading of prognathism: easy (Classes A & B), 96.1%; difficult (Class C), 3.9%
Wide and short neck: normal (neck:body ratio 1:13), 86.9%; difficult (ratio ≥ 1:13), 13.1%
High-arched palate: yes, 1.9%; no, 98.1%
Protruding incisors: yes, 4.2%; no, 95.8%
The association between the different variables and difficulty in intubation was evaluated using the chi-square test for qualitative data and Student’s t-test for quantitative data; p < 0.05 was regarded as significant. The clinical data for each test were used to obtain the sensitivity, specificity, and positive and negative predictive values. Results are shown in Table III, and a worked sketch of these calculations is given after the table.
Table III: Comparative analysis of various physical factors and scoring systems

Obesity: sensitivity 81.8%, specificity 72.76%, PPV 6.34%, NPV 99.43%
Inter-incisor gap: sensitivity 18.8%, specificity 94.14%, PPV 6.6%, NPV 98.1%
Thyromental distance: sensitivity 72.7%, specificity 96.5%, PPV 32.0%, NPV 99.4%
Head and neck movement: sensitivity 86.36%, specificity 86.0%, PPV 34.6%, NPV 99.7%
Prognathism: sensitivity 4.5%, specificity 96.3%, PPV 2.7%, NPV 97.9%
Wide and short neck: sensitivity 45.5%, specificity 87.9%, PPV 7.8%, NPV 98.6%
High-arched palate: sensitivity 40.1%, specificity 99.38%, PPV 60.0%, NPV 98.67%
Protruding incisors: sensitivity 4.6%, specificity 95.9%, PPV 2.5%, NPV 97.79%
Mallampati scoring system: sensitivity 77.3%, specificity 98.2%, PPV 48.57%, NPV 99.5%
Cormack and Lehane’s scoring system: sensitivity 100%, specificity 99.7%, PPV 88%, NPV 100%
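As a check on figures of this kind, the four performance measures in Table III are simple ratios over a 2 x 2 table of predicted against observed difficulty. The Python sketch below is a minimal illustration; the example counts are invented for demonstration and are not data from this study.

```python
# Minimal sketch of the test-performance measures reported in Table III.
# tp/fp/fn/tn are counts from a 2x2 table of a dichotomised predictor
# against the laryngoscopy outcome (difficult vs easy).
def test_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, PPV and NPV as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),  # flagged among truly difficult
        "specificity": 100 * tn / (tn + fp),  # cleared among truly easy
        "ppv": 100 * tp / (tp + fp),          # truly difficult among flagged
        "npv": 100 * tn / (tn + fn),          # truly easy among cleared
    }

# Invented example: 16 of 20 difficult laryngoscopies flagged, with 81
# false alarms among 580 easy ones.
print(test_performance(tp=16, fp=81, fn=4, tn=499))
# sensitivity 80.0, specificity ~86.0, ppv ~16.5, npv ~99.2
```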
Discussion
Difficulty in endotracheal intubation constitutes an important cause of morbidity and mortality, especially when it is not anticipated preoperatively. This unexpected difficulty in intubation is the result of a lack of accurate predictive tests and inadequate preoperative assessment of the airway. Risk factors, if identified at the preoperative visit, help to alert the anaesthetist so that alternative methods of securing the airway can be used or additional expertise sought beforehand.
Direct laryngoscopy is the gold standard for tracheal intubation. There is no single definition of difficult intubation but the ASA defines it as occurring when “tracheal intubation requires multiple attempts, in the presence or absence of tracheal pathology”. Difficult glottic view on direct laryngoscopy is the most common cause of difficult intubation. The incidence of difficult intubation in this study is similar to that found in others.
As far as the predictors are concerned, different parameters for the prediction of difficult airways have been studied. Restriction of head and neck movement and decreased mandibular space have been identified as important predictors in other studies. The Mallampati classification has been reported to be a good predictor by many but found to be of limited value by others14. Interincisor gap, forward movement of the jaw and thyromental distance have produced variable results in predicting difficult airways in previous studies7,15. Even though thyromental distance is a measure of mandibular space, it is influenced by the degree of head extension.
There have been attempts to create various scores in the past. Many of them could not be reproduced by others or were shown to be of limited practical value. Complicated mathematical models based on clinical and/or radiological parameters have been proposed in the past16, but these are difficult to understand and follow in clinical settings. Many of these studies consider all the parameters to be of equal importance.
Instead of trying to find ‘ideal’ predictor(s), scores or models, we simply arranged them in an order based on the strength of association with difficult intubation. Restricted extension of head, decreased thyromental distance and poor Mallampati class are significantly associated with difficult intubation.
In other words, patients with decreased head extension are at much higher risk of having a difficult intubation compared with those with abnormalities in other parameters. The type of equipment needed can be chosen according to the parameter which is abnormal. For example, in a patient with decreased mandibular space it may be prudent to choose devices which do not involve displacement of the tongue, such as the Bullard laryngoscope or the fibreoptic laryngoscope. Similarly, in patients with decreased head extension, devices such as the McCoy laryngoscope are likely to be more successful.
Conclusion
This prospective study assessed the efficacy of various parameters of airway assessment as predictors of difficult intubation. We found that head and neck movement, a high-arched palate, thyromental distance and the modified Mallampati classification were the best predictors of difficult intubation.
The implementation of Modernising Medical Careers (MMC) significantly altered the structure of postgraduate medical education in the UK. MMC oversees the training of all UK doctors from the outset of their career, the first two years of which comprise the Foundation Programme. Successful completion of the Foundation Programme is based upon doctors’ Foundation Portfolios in which they must demonstrate achievement of essential competences and work-based assessments. Doctors are also encouraged to attain additional competencies and to develop their portfolio further. Voluntary educational activities undertaken outside the workplace form the basis of this.
Application into Specialist Training following the Foundation Programme is highly competitive, with an average of three applicants for each post in 2008.1 Points-based shortlisting criteria are used to select candidates, and are based upon the contents of the Foundation Portfolio and application form. This means that points can be scored for activities not required for completion of the Foundation Programme, such as Royal College membership examinations and course attendance. Foundation Programme doctors undertake voluntary activities to improve their portfolios; however, no quantifiable evidence currently exists as to what doctors undertake in this respect.
We aimed, therefore, to determine firstly what voluntary educational activities Foundation doctors are undertaking. We also aimed to establish their underlying motivating and deterring factors, financial costs incurred, and use of annual and study leave and ‘specialty taster days’, to assess the overall extent and impact of portfolio activities. The authors hope the results are useful in informing medical students and Foundation trainees of the scope of activities of their peers, and in advising supervisors of the activities of their trainees.
Methods
A two-page anonymous questionnaire was posted at random to 100 Foundation doctors across five hospitals in the East Midlands Deanery (50 Foundation Year 1, 50 Foundation Year 2); see Appendix 1.
Demographics
The first section of the questionnaire asked for the sex and grade of respondents (Foundation Year 1 (FY1) or Foundation Year 2 (FY2)).
Activities
Respondents were directly asked whether they were attending courses or conferences, using online e-learning packages, joining professional bodies/societies, or sitting higher professional examinations such as Royal College membership examinations or higher degrees.
Cost
Doctors were asked how much money (excluding that of teaching allowances) and days of annual leave they used on the above activities. They were also asked how many of their allowed ‘specialty taster days’ they had taken during each year.
Motivating and deterring factors
Doctors were asked to rank from a list the motivating and deterring factors determining what activities they were undertaking.
Professional development
Doctors were finally asked to rank which educational activities they thought would make them a better overall Foundation doctor.
Results
Response rate was 49% with 49 doctors returning the questionnaire. Of these 69.4% (n=34) were Foundation Year 1 (FY1) and 30.6% (n=15) were Foundation Year 2 (FY2), with 53.1% female and 46.9% male.
Activities
Overall, 89.8% (n=44) of respondents were engaged in voluntary educational activity (FY1 85.3%, FY2 100%). The most common mode (89.8%, n=44) was e-learning packages (FY1 85.3% (n=29), FY2 100% (n=15)), followed by joining professional bodies or societies, e.g. the BMA (73.5%, n=36) (FY1 64.7% (n=22), FY2 93.3% (n=14)), attending courses (69.4%, n=34) (FY1 55.9% (n=19), FY2 100% (n=15)), undertaking higher qualifications (36.7%, n=18) (FY1 14.7% (n=5), FY2 86.7% (n=13)) and attending conferences (14.3%, n=7) (FY1 14.7% (n=5), FY2 13.3% (n=2)) (see Figure 1).
Fig 1 – A graph to show the percentage of Foundation year 1 and 2 doctors involved in each mode of voluntary educational activity.
Of the courses attended, 25.5% pertained to teaching, 25.5% to advanced life support and 18.0% to surgical skills. The remaining 31% of courses related to a variety of other interests such as anaesthetic skill days, expedition medicine courses, and sub speciality specific courses such as movement disorder workshops and laparoscopic surgery.
Cost
The mean amount spent by Foundation Year 1 doctors on these activities was £581 (range £0–£3100); Foundation Year 2 doctors spent significantly more, at £1842 (range £0–£3500). The mean cost per activity is shown in Figure 2.
Fig 2 – A graph to show the mean amount of money spent by Foundation Year 1 and 2 doctors on each mode of educational activity.
The mean number of days of annual leave used by doctors for these activities was 2.8 in FY1 and 5.3 in FY2, combining to an average of 8.1 days over the whole Foundation Programme. Of their five allowed ‘taster days’, the mean number attended was 1.3 and 2.9 by FY1 and FY2 doctors respectively. Only 20.4% of doctors took their full entitled allowance.
Motivating and deterring factors
The most common factor motivating Foundation doctors to undertake portfolio educational activities was the belief that they would help candidates achieve a specialist training post (67.3%). Only 12.2% engaged primarily out of personal interest, and 8.2% primarily to improve their medical competence (see Table 1).
Primary motivating factor (FY1 % (n); FY2 % (n); overall % (n)):
Improve chance of specialist training post: FY1 58.8 (20); FY2 86.7 (13); overall 67.3 (33)
Personal interest: FY1 14.7 (5); FY2 6.7 (1); overall 12.2 (6)
To improve medical competencies: FY1 11.8 (4); FY2 0 (0); overall 8.2 (4)
On advice of seniors: FY1 11.8 (4); FY2 6.7 (1); overall 10.2 (5)
Other: FY1 2.9 (1); FY2 0 (0); overall 2.0 (1)
Total: FY1 100 (34); FY2 100 (15); overall 100 (49)
Table 1 – A table to show the primary motivating factors of foundation doctors to undertake voluntary portfolio educational activities.
The most common deterrents were a lack of study leave (42.9%), lack of annual leave (22.4%) and expense (20.4%) (See Table 2).
Primary deterring factor (FY1 % (n); FY2 % (n); overall % (n)):
Lack of study leave: FY1 38.2 (13); FY2 53.3 (8); overall 42.9 (21)
Lack of annual leave: FY1 23.5 (8); FY2 20 (3); overall 22.4 (11)
Financial expense: FY1 17.6 (6); FY2 26.7 (4); overall 20.4 (10)
Lack of career choice: FY1 11.8 (4); FY2 0 (0); overall 8.2 (4)
Not relevant to Foundation doctors: FY1 8.8 (3); FY2 0 (0); overall 6.1 (3)
Other: FY1 0 (0); FY2 0 (0); overall 0 (0)
Total: FY1 100 (34); FY2 100 (15); overall 100 (49)
Table 2 – A table to show the primary deterring factors listed by foundation doctors that deter them from undertaking voluntary educational portfolio activities.
Professional development
The final section of the questionnaire asked respondents which educational activity they felt was most influential in making them a better Foundation doctor. Interestingly, 83.7% (n=41) (FY1 88.2% (n=30), FY2 73.3% (n=11)) felt on-call experience was most influential, with only 6.1% (FY1 2.9% (n=1), FY2 13.3% (n=2)) citing courses, 6.1% (FY1 2.9% (n=1), FY2 13.3% (n=2)) e-learning packages and 4.1% (FY1 2.9% (n=1), FY2 6.7% (n=1)) qualifications (Fig 3).
The academic conference was ranked least influential by 89.8% (n=44) (FY1 85.3% (n=29), FY2 100% (n=15)) of respondents, followed by 6.1% (n=3) (FY1 8.8% (n=3), FY2 0.0% (n=0)) citing courses, and 4.1% (n=2) (FY1 5.9% (n=2), FY2 0.0% (n=0)) e-learning packages (Fig 3).
Fig 3 – The above graph was the response of Foundation doctors when asked which activities they thought were most and least influential in making them a better foundation doctor.
Discussion
This survey suggests that Foundation doctors undertake numerous activities at significant personal expense to expand their portfolios, and are primarily motivated by a belief that this will increase their chance of obtaining higher specialist training posts.
Educational activities and opportunities
The advent of the European Working Time Directive and the New Deal document2 have resulted in junior doctors working considerably fewer hours than in previous years. This has led some authors to conclude that the quality of learning opportunities in the working environment has been reduced.3 With 89.8% of Foundation doctors in this survey actively undertaking some form of educational activity outside of work, this suggests that Foundation doctors may be going some way to redressing this balance. It may also come as a surprising yet reassuring figure to Foundation Programme educational supervisors who may be unaware of the education of their trainees outside of work.
We found the most popular mode of educational activity to be the e-learning package. E-learning is an effective and extensively employed method both for distance learning4 and as an adjunct to “traditional” lecture-based techniques across several disciplines. It has also been shown to be a well-received and practical method of supplementary education for doctors5, and our study suggests this is particularly true for the Foundation years. The reasons why e-learning is popular in this group were not explored, but its low cost and easily accessible, modular nature may have some part to play. As medical schools continue to utilise this modality to a greater extent, its follow-through into the Foundation years and postgraduate medical education in general is inevitable. With such high uptake, e-learning packages are a promising format for delivering education to this group.
Popular courses undertaken by Foundation doctors related to obtaining teaching skills, or advanced life support. This suggests that Foundation doctors place a high emphasis on teaching and training, and on recognising and managing acutely ill patients. These are two core objectives of the Foundation Programme. However, one could also argue that doctors undertaking courses outside work to achieve essential competencies casts doubt on the ability of the Foundation Programme to deliver them. We submit that educational supervisors are in a prime position to appraise this issue.
The least popular mode of activity in our survey was attendance at a medical conference. It was also regarded as least influential by 89.8% of respondents. There is a global shortage of medical academics6, and as conferences serve to introduce junior doctors to academic medicine and research, perhaps academic doctors should take a more prominent role in promoting conferences as an educational activity.
Time and money
Doctors incur the majority of their costs attending courses with Foundation Year 1 and 2 doctors spending £365 and £1120 respectively on this area (fig 2). This highlights the possibility that Foundation doctors may be prone to financial exploitation by a growing number of courses which are often unvalidated. As senior advice was the primary motivating factor for only 10.2% of activities, this suggests that educational supervisors could play a greater role in assessing, appraising and advising their trainees on the courses best suited for them and their professional development.
The overall financial cost incurred for all portfolio educational activities was £581 for FY1 and £1842 for FY2. Whilst previous estimates have been made in this area, this is the first specific to the Foundation Programme and to include non-mandatory outlay, and represents 3% and 7% of the basic salary for FY1 and FY2 doctors respectively, before tax. As our survey found financial expense to be a significant deterrent to portfolio activity (20.4% of respondents), a potentially serious implication is that expense will limit the uptake of postgraduate education in the future. In the authors’ own experience, such professional costs are not explained to medical students; this issue merits more attention in undergraduate education.
A lack of study leave was highlighted as the main deterring factor to educational portfolio activities (42.9%). This is of particular interest as only 20.4% of Foundation doctors use their full ‘taster day’ entitlement. These ‘taster days’ are a fundamental aspect of the Foundation Programme, offering doctors the opportunity to explore a specialty for up to five days per year. However, whilst doctors fail to utilise them, they take an average of 8.1 days’ annual leave over the two-year programme for educational purposes.
The reasons behind this are unclear, but may be due to a lack of awareness of these ‘taster days’. With a lack of study leave hindering educational activities, a potential solution might be for doctors to have the option to utilise ‘taster days’ as a form of study leave.
Professional education and motivation
Between 1998 and 2005 the number of medical students in the UK rose by 57%.7 Increasing numbers of doctors and decreasing working hours may reduce the amount of on-call experience for those in the Foundation Programme. However, it is this on-call experience that is regarded by the vast majority (83.7% in this study) as the most important educational modality in making them a better Foundation doctor. Although time and money are perceived as barriers to portfolio educational activities, it appears that doctors value this on-call experience above all. With key aims of the Foundation Programme being training and emergency competence, efforts must be made to preserve this experience.
Whilst Foundation doctors are engaging in numerous portfolio activities, their underlying motivations are interesting. It appears this group are primarily motivated not by the educational benefits of these activities, but rather by their perceived ability to help attain a specialist training post. This could suggest that the educational portfolio is at risk of becoming a ‘tick-box’ means for career progression, rather than addressing limitations, exploring interests and aspiring to clinical excellence. This contrasts with the conclusions of the most recent assessment of postgraduate medical education in the UK8.
As competition for jobs appears to be driving Foundation doctors to undertake educational activities it remains unclear whether engaging in these activities to obtain jobs, rather than competencies, reduces their validity and educational outcomes. Furthermore it is unclear whether trainees will be more likely to achieve their overriding aim of obtaining a specialist training post through these activities. Determining the career outcomes of doctors undertaking these activities will provide an evidence base, allowing educational supervisors to optimally advise their trainee in portfolio educational activities.
Conclusions
This is a baseline survey quantifying portfolio educational activities in the Foundation Programme, applicable to trainees and supervisors alike. Whilst the latter are well aware of assessments such as DOPS (Direct Observation of Procedural Skills) and CbDs (Case-based Discussions), they are often less aware of the voluntary educational activities of their trainees.
Our study would suggest that Foundation Programme doctors are a cohort driven to undertake numerous voluntary educational activities, albeit largely to achieve career progression rather than to accrue educational benefit. To this end they undertake activities such as e-learning, courses and higher qualifications at the expense of conferences. For this they spend significant amounts of money and leave, yet continue to cite a lack of traditional study leave as a barrier to further educational development. The authors would suggest that further work is needed to develop the role of educational supervisors in the Foundation Programme in harnessing the motivation of their trainees, and guiding them appropriately.
Key Points
·Foundation doctors spend significant amounts of time and money on voluntary educational activities.
·Foundation doctors are primarily driven to undertake these activities by the belief that they will help them obtain specialist training posts.
·A lack of study leave is the primary barrier to voluntary education.
·The academic medical conference is viewed as the activity least likely to improve medical competence, whereas on-call experience is regarded as the most likely.
·Foundation Programme educational supervisors are best placed to guide their trainees towards the most appropriate educational modalities.
'The following article is another in a series of critical essays examining the current status of Psychiatry in the NHS'
In therapy
"Good advice is often a doubtful remedy but generally not dangerous since it has so little effect.’ Carl Jung (1875-1961)
The word ‘therapy’, defined by the Oxford Dictionary as ‘to treat medically’, is derived from the Greek therapeuein, meaning to minister. Nowadays it can denote any treatment from massage therapy to music therapy. In mental health it has become synonymous with counselling or psychotherapy. Drug therapy, believe it or not, is included in the definition, though it is frowned upon by many in the mental health industry, and is often the subject of derisory and ill-informed comments from both medical and non-medical practitioners. Many medical doctors who decide to embark on a career in psychotherapy generally forfeit all their knowledge of physiology, biochemistry, anatomy, pharmacology and many other subjects, in the pursuit of an ideal that somehow all life’s problems can be resolved through a particular brand of talking therapy. One wonders why they spend many years in medical school and in postgraduate teaching. Why devote all that time studying subjects which have no relevance to common or garden psychotherapy? Would it not be more practical for those who specifically want to pursue such a career in psychotherapy to enrol in a psychotherapy training college, and then ‘specialise’ in whatever form of psychotherapy they aspire to? Such individuals, instead of wasting years training as medical doctors, could receive a diploma or certificate to practise psychotherapy. Likewise, you do not need to be a neurosurgeon to become a neuroscientist, or a physician to study virology. For some reason, however, scientists, including innovators in the fields of medicine and surgery, seem to be disparaged by both medical and non-medical psychotherapists, and seen as persons who can only conceptualise individuals as molecules, or objects to be examined with sophisticated machinery. Psychotherapy, it would seem, induces a state of delusional intellectualism among some of its members. Such intellectualism, if it be described as such, portrays an affected and misguided arrogance towards matters scientific. Yet curiously, published papers in mental health journals or in the press, when written by ‘experts’, are often interspersed with the words ‘science’ or ‘scientific’ even when they are little more than observations, studies, or comparisons between populations receiving a particular mode of this therapy or that therapy. We are not talking about advances in the treatment of neuroblastoma or other cancers here, or a cure for dementia. It is one thing to describe Addison’s disease; it is another to discover the cause.
The panacea
‘Nice people are those who have nasty minds.’ Bertrand Russell (1872-1970)
The necessity for ‘therapy’ now seems to be deeply ingrained in our culture and the army of pop psychologists and psychiatrists, non-biological therapists, and agony aunts increases, it seems, by the day. In the media, what is quoted as ‘research’ and passed off as science is often no more than a street survey, or an opinion poll on a current fad or passing headline grabber, rather like those ‘we asked a hundred people’ questions posed on popular family quiz shows. The therapy bandwagon rolls on and is quite lucrative if you are fortunate enough to capture the market with your own brand of snake-oil cure for life’s woes. Admission to the Mind Industry is free and, furthermore, there are no compulsory, nationally agreed standards for the conduct and competence of non-medical psychotherapists and counsellors. Even if removed from the membership of their professional body for inappropriate conduct, say, therapists can continue to practise, there being no legal means to prevent them from doing so. Most members of the public are unaware of this lack of statutory regulation. It is not surprising then that many ‘therapists’ flagrantly sell their product and any attempt to question the authenticity of a particular ‘cure’ is met with vitriol and feigned disbelief. After all, one has to guard one’s source of income. The author Richard Dawkins was subject to such venom and hostility when he dared to question the reasons and need for religion in his book The God Delusion. Woe betide any practitioner who dares to criticise the favourable results of ‘carefully conducted positive outcome studies’ on, say, cognitive therapy, even when one’s own clinical experience attests to the opposite. Of course, some therapies work, some of the time, but not because of the outlandish claims made for them; rather, they work best when a ‘client’ harnesses the energy and motivation to get better and ‘chooses’ one brand of therapy over another, or feels at ease with a therapist who is empathic and understanding, much as one might confide in a best friend, rather than because of any inherent benefit from the ‘therapy’ itself. Certain therapies work because they have an intrinsic behavioural component to them, for example, dialectical behaviour therapy for ‘borderline personality’ disorder (as real a condition as ‘sociopathic’ disorder), or cognitive behaviour therapy for obsessive-compulsive disorder and phobic disorders. With other therapies one would almost have to admit feeling better given the enormous sums of money involved, say, for a one-week course in a therapeutic healing centre. After all, it would be painful to admit that an expensive holiday was a waste of time when a lot of hard-earned money has been spent.
The enemy within
‘Sorrow and silence are strong, and patient endurance is godlike.’ Henry W Longfellow (1807-1882)
Why does one who is vehemently opposed to psychiatry want to become a psychiatrist? Do as many medically qualified psychotherapists as non-medical therapists dismiss the role of biology in the causation of mental health disorders? Why do we speak of anti-psychiatrists and not anti-cardiologists? What about the claims for psychotherapy itself? Is it possible truthfully to scientifically evaluate whether or not it works? Criticism comes from within its own camp. To paraphrase one well-known psychologist, ‘Psychotherapy may be good for people, but I wish to question how far it changes them, and I strongly cast doubt on any assumption that it cures them’.1 The irony now is that the therapies themselves are being ‘dumbed down’, sometimes aimed at a younger audience to court popular appeal. Trite and stultifying sound bites such as ‘getting in touch with your feelings’, ‘it’s good to cry’, ‘promote your self-esteem’, ‘search for your inner child’, and many other inane phrases flourish. Failure to display distress or intense emotional turmoil outwardly (say, after a bereavement) is seen as weak, maladaptive, and abnormal, instead of being viewed as a strength, a mark of dignity, and an important way of coping. The corollary, of course, is the spectacle of some psychiatrists, because of their medical training, endeavouring to explain every aspect of mental health psychopathology in terms of neurotransmitters and synapses. And then there is the scenario of non-medical ‘scientists’ critically evaluating and expounding on subjects completely outside their remit, for example, uttering pronouncements, say, on the neuropharmacology of depression, or the reputed reduction in hippocampal volume caused by posttraumatic stress disorder, when they are not qualified to do so, having only a superficial knowledge of pharmacology and/or neuroimaging respectively. Instead of asking the engineer’s advice on the safety strength of a steel column supporting a bridge, why not ask the carpenter! The absurdity knows no bounds.
It seems that all life’s problems are self-inflicted or caused by ‘society’ or faulty upbringing. Back to the schizophrenogenic mother then. It is up to the client to seek the therapist’s help and advice by way of talking cures to set him/her on the road to recovery. To be fair to non-medical therapists and lay counsellors, some psychiatrists do not believe in the genetics of, or neurobiological contribution to, mental health. Some even believe mental illness to be a myth! Imagine an electrician who does not believe in electricity, or, to compare like with like, an oncologist who does not believe in cancer. Many decades ago the psychiatrist Thomas Szasz described psychology as pseudoscience and psychiatry as pseudomedicine.2 Since then, others have reinforced Szasz’s conclusions. Who can blame them? To illustrate by one example, many court cases (particularly in the forensic field) involve a psychiatrist/psychologist giving ‘expert’ testimony for the defence, with the prosecution in turn calling a psychiatrist/psychologist to offer a contradictory opinion on, say, the defendant’s fitness to plead. The prosecution says the defendant is acting; the defence argues the defendant is suffering from a mental disorder. No surprises there as to why psychiatry has descended into farce.
Psychotherapy is all talk
‘There is no art to find the mind’s construction in the face.’ William Shakespeare (1564-1616)
One outspoken critic has had the courage, some might say the audacity, to assert that the psychology/psychiatry therapy hoax is still as widespread and dangerous as it was when the neurologist Sigmund Freud first invented what she describes as ‘the moneymaking scam of psychoanalysis’.3 Briefly, at the core of psychoanalysis lies the principle that the id, ego and superego (not originally Freud’s terms) are considered to be the forces underlying the roots of psychological turmoil. The id, or pleasure principle, is in conflict with the superego or conscience (the conscious part of the superego) and the resultant outcome is mediated by the ego. Any interference with this delicate balance results in symptoms. However, this simplistic theory has come in for much criticism over the years and many scholars now consider the claims of psychoanalysis as having little credibility. It is not philosophy and it is certainly not science. Research in this area is fraught with even more methodological problems than, say, cognitive therapy studies. There is no way of testing analysts’ reports or interpretations reliably, and their conclusions are speculative and subjective. One eminent psychotherapist pronounced ‘as far as psychoanalysis is concerned, the logistical problems of mounting a full-scale outcome study are probably insurmountable’.4 It is impossible to develop a truly valid research protocol in either cognitive or psychoanalytic treatments to account for all the subtle, different variables that make individuals so unique. How can one research the mind? There are no specific blood tests or brain investigations that diagnose mental illness in the same way one might diagnose neuroleptic malignant syndrome or Parkinson’s disease respectively, at least not yet. Measuring scales are a very crude way of conducting research into mental health, and are not always objective, particularly when researchers are keen to have a favourable result. This applies also to drug trials, I hasten to add.
Many people feel better simply by seeing and discussing their troubles with a friend, their physician, a member of the clergy, or their next-door neighbour for that matter. Such individuals are usually more than prepared to give considerable time to listening sympathetically and offering possible solutions to often intricate and personal problems. Nonetheless, talking about a negative experience or trauma does not necessarily alleviate the distress or pain felt by that event. One wonders then why a ‘client’ would be expected to get better simply by insisting on changing his/her ‘negative set’, for instance, by doing homework exercises for the teacher/therapist. No doubt countless individuals move in and out of therapy and support groups; some may even benefit from self-help books. However, it is the earnest fatuity in such books that is so tragically funny, and that people take them so seriously is even more worrying.5 Some ‘clients’ find therapy a waste of time, but since they do not return for their follow-up sessions it is assumed they are well, or have moved on, or are simply unsuitable. On the other hand, there are countless individuals who find an inner resilience to withstand and improve themselves through their own volition, with a few prompts on the way, rather like finding one’s way through unfamiliar territory with the aid of a street map. Likewise, drug treatment is of very little value if one’s relationships are in disarray, or an individual is in great debt, for instance. The ‘worried well’ simply require practical help from appropriate advisors, not healthcare professionals, and should they wish to spend money on counsellors and therapists, that is for them to decide.
Common sense and nonsense
‘He who exercises his reason and cultivates it seems to be both in the best state of mind and dear to the gods.’ Aristotle (384 -322 BC)
We have now reached a point where minor setbacks and irritations are seen as obstacles to be treated. By adopting this attitude we are succumbing to the might of the Therapies and Mind Industry, eliminating those experiences that define what it is to be human. Individuals freed from moral duty are now patients or victims. This abnegation, abdication and suffocation of individual responsibility for the sake of self-esteem is creating a society which needs only to be placated and made content.3 Anything that causes dismay or alarm is a trauma, and therefore needs therapy. Any crime or misdemeanour is not our fault. We have a psychological condition that absolves us from every sin or ailment. The opposite scenario is that, whether through scientific ignorance or a refusal to acknowledge that the human genome may play a part, or perhaps both, some therapists accuse organic theorists of being ‘too ready’ to favour biological models, believing that dysfunctions in neuronal circuits have no part to play in ‘disorders of the psyche’. We are not all at the mercy of our neurotransmitters, they cry. Neither view is accurate. Psychoanalytic psychotherapy is no exception either. The nub of psychoanalysis is the therapist’s analysis of transference and resistance, which distinguishes this form of psychotherapy from all other types. With this brand of therapy absurd interpretations abound, leading one psychotherapist to openly admit that ‘jargon is often used to lend a spurious air of profundity to utterances which are nothing of the kind’.6 The author Frederick Crews writes: ‘I pause to wonder at the curious eagerness of some people to glorify Freud as the discoverer of vague general truths about human deviousness. It is hard to dispute any of these statements about “humans”, but it is also hard to see why they couldn't be credited as easily to Shakespeare, Dostoevsky, or Nietzsche - if not indeed to Jesus or Saint Paul - as to Freud’.7
One particular concept that is difficult to sustain is that repressed memories of traumatic events lead to psychiatric disorders. That such repressed memories in some instances encompass sexual preferences towards one or other parent is even more perplexing to most people. The Oedipus and Electra complexes, expounded by Freud and Jung respectively, were founded on Greek mythology, hardly the basis for scientific study. Psychoanalysis set out to cure a disorder by uncovering repressed memories. However, traumatic memories by their very nature are actually difficult to ‘repress’. Of course individuals do forget. This is a normal part of the human condition. Memories are recollected or resurrected by association of ideas; multiple-choice format questionnaires work on the same principle. Familiar sights, smells and sounds, as famously depicted in Marcel Proust’s À la recherche du temps perdu (‘and suddenly the memory revealed itself. The taste was that of the little piece of madeleine cake’), often conjure up previously ‘forgotten’ memories, what used to be described as involuntary memory. Forgetting does not always equate with psychopathology; forgetfulness is common and becomes more common with age. In psychiatric treatment, electroconvulsive therapy (ECT) is associated with a high prevalence of memory disturbances, often irreparable. With organic disorders, memory channels or traces are damaged, for example, through alcohol, or subcortical injury.8 However, even in Alzheimer’s disease, at least in the early stages, memories are often not totally erased, a fact utilised in reminiscence therapy. Memories in healthy people are not suppressed or repressed. Not wanting to talk about some painful issue is not necessarily ‘denial’, nor does it denote a fear of unleashing repressed/suppressed memories.
After the Trauma
‘We seldom confide in those who are better than ourselves.’ Albert Camus(1913-1960)
Mental health care workers often speak of posttraumatic stress disorder where memories of an especially overwhelming and upsetting event are ever-present and particularly distressing, leading to panic feelings, flashbacks, and recurrent nightmares. Such memories may be easily evoked, sometimes merely by watching a documentary, reading a news item, listening to a radio programme, and so forth. In other words, patients are all too quickly reminded of them - the memories are very vivid, not repressed. Often people simply do not want to be reminded. They are not in denial - they are simply avoiding the issue and should be allowed to do so. Whereas formerly such traumas were associated with catastrophic events such as the Holocaust or major natural disasters, nowadays the term posttraumatic has become over-inclusive. Some people have ‘trauma’ imposed on them in the form of invidious suggestions that they were subject to abuse of one form or another. On the contrary, there is no evidence that any of Freud’s patients who came to him without memories of abuse had ever suffered from sexual abuse. Furthermore, Freud ensured that his theory of repression could not be easily tested, and in practice the theory became ‘unfalsifiable’.9 Traumatic memories of abuse are very difficult to forget, and patients struggle to suppress them, in the author’s experience.
Undoubtedly, some memories are painful, and generally speaking, there are individuals who want to ‘forget the past’ in order to ‘move on’, which would strike most of us as being a reasonably healthy approach in certain circumstances. Many patients, for instance, would want to ‘move on’ to a healthier, more satisfying relationship, change job, alter their lifestyles, and so forth. When it comes to major catastrophic events, memories are not preconscious or unconscious: they are very often disturbingly real, and very difficult to live with; in many cases time is the only ‘healer’. Some traumatic memories never fade and in many cases no amount of talking will erase the painful memories. Witness the Holocaust survivors and those subject to horrendous atrocities throughout the Pol Pot regime, for example.
It is difficult to ascertain therefore whether so-called defence mechanisms such as repression or denial are truly separate entities operating in the human psyche, or merely part of a conscious natural survival instinct to ward off painful stimuli. How can such mechanisms be unconscious when it is commonplace to hear of people ironically talking about ‘being in denial’? Individuals who attempt to overcome their own addictions, for example, are seen as suffering from a ‘perfectionist complex’ and as reluctant to admit their failings. In other words, acknowledge you are unable to cope and are in denial about the true nature of your affliction and you will then be offered a place in the recovery programme.5 Therapists see denial as a mechanism deployed to avoid the pain of acknowledging a problem and taking action to seek help. It is not medical bodies but grass-roots campaigners who are foremost in demanding that every ‘traumatic’ or ‘problematic’ condition be medicalised, creating more opportunities for counselling intervention.10 Hence the new breed of disorders to include shyness, inattentiveness, road rage, trolley rage, sex addiction, shopping addiction, internet addiction and so forth.
Beyond therapy
‘We are all born mad. Some remain so.’ Samuel Beckett (1906-1989)
Talking therapy is the new religious cult, what people now turn to in order to find solace or answers (‘discover your real self’), and even to cope with often inconsequential day-to-day events. The constant, pervasive emphasis on counselling diminishes the capacity of healthy people to confront commonplace problems they encounter in ordinary daily life. Normal variants in behaviour are considered pathological and ‘psychologised’ or ‘medicalised’. Psychobabble prevails. We all need therapy or a pill. More and more ‘disorders’ are being invented. The endless proliferation of, and demand for, ‘expertise’ in all areas of life is eroding the willingness of those who are best positioned to offer at least measured advice accumulated from years of experience. There are no ‘experts in living’, and some individuals need to steer away from their excessive dependency on, and seeking the approval of, others who claim to be. Kierkegaard once wrote of people ‘taking refuge in a depersonalized realm of ideas and doctrines rather than confronting the fact that everyone is accountable to himself for his life, character and outlook’.11 In the words of John Stuart Mill, ‘Ask yourself whether you are happy, and you cease to be so.’
Headaches associated with or occurring around sexual activity have been recognized since the time of Hippocrates [1, 2]. Wolff [3] discussed headache during sexual activity in 1963. However, these headaches started to be formally reported in the 1970s, first by Kitz in 1970 [4] and then Paulson [5] and Martin [6] in 1974. The first published study was by Lance in 1976 [7].
Classification
This type of headache has been given many different names: benign sex headache (BSH), benign coital headache, coital cephalgia, orgasmic cephalgia, primary headache associated with sexual activity (PHSA), coital ‘thunderclap’ headache, primary thunderclap headache (PTH), orgasmic headache (OH) and preorgasmic headache.
In 2004 the International Headache Society [8] classified headache associated with sexual activity (HSA) as a distinct form of primary headache.
These benign HSA are bilateral headaches, precipitated by sexual excitement (masturbation or coitus), occurring in the absence of any intracranial disorder, and can be prevented or eased by ceasing activity before orgasm. Type 1 consists of a bilateral, usually occipital, pressure-like headache that gradually increases with mounting sexual excitement. Type 2 headaches have an explosive, throbbing quality and appear just before or at the moment of orgasm. These often start occipitally but may generalize rapidly [9].
However, there are individuals who experience patterns of HSA that do not fall within these classifications and are included as a subgroup of HSA with unusual psychopathology [10]. For example, Paulson and Klawans [5] described a rare type of postural sexual headache after coitus, which is present on standing, eased by lying down, accompanied by low CSF pressure, and persists for several weeks.
International Headache Society diagnostic criteria - ICHD-2(7) classification for HSA
4.4 Primary headache associated with sexual activity
4.4.1 Pre-orgasmic headache
A. Dull ache in the head and neck associated with awareness of neck and/or jaw muscle contraction and fulfilling criterion B
B. Occurs during sexual activity and increases with sexual excitement
C. Not attributed to another disorder
4.4.2 Orgasmic headache
A. Sudden severe (“explosive”) headache fulfilling criterion B
B. Occurs at orgasm
C. Not attributed to another disorder
7 Secondary headache disorder
7.2.3 Headache attributed to spontaneous (or idiopathic) low CSF pressure
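Read as logic, each diagnosis requires all of its criteria to hold at once. The Python sketch below merely restates the two primary-headache criterion sets above as boolean checks; the function and parameter names are illustrative and are not part of the ICHD classification.

```python
# Illustrative restatement of ICHD-2 4.4.1 and 4.4.2; names are ours.
def preorgasmic_headache(dull_ache_with_neck_jaw_contraction: bool,
                         occurs_during_sex_increasing_with_excitement: bool,
                         attributed_to_another_disorder: bool) -> bool:
    """4.4.1: criteria A and B hold, and C excludes another disorder."""
    return (dull_ache_with_neck_jaw_contraction
            and occurs_during_sex_increasing_with_excitement
            and not attributed_to_another_disorder)

def orgasmic_headache(sudden_explosive_headache: bool,
                      occurs_at_orgasm: bool,
                      attributed_to_another_disorder: bool) -> bool:
    """4.4.2: criteria A and B hold, and C excludes another disorder."""
    return (sudden_explosive_headache
            and occurs_at_orgasm
            and not attributed_to_another_disorder)
```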
Prevalence
HSA are not common but it is generally felt that they are under-reported due to patient embarrassment [1] at telling health professionals when their headaches occur. Prevalence in the general population is reported at around 1% [11, 12] and is three to four times greater in men than in women [11, 13-16]. There appear to be two peak times of onset: in the early 20s and then around age 40 [17]. About 22% of HSA are Type 1 and 78% are Type 2 [18]. The male:female ratio is the same for Type 1 and Type 2 headache.
Pathophysiology
HSA are not clearly understood but by definition lack serious underlying disease. They are, however, unpleasant, frightening, repetitive and episodic. The clinical characteristics of Type 1 suggest a relationship with tension/muscular contraction headaches [2, 13, 15, 16]. There is a significant association between the risk of having more than one cluster of HSA and the presence of tension headaches or migraine [11, 14-17, 19-21]. Biehl [11] concluded that the association between migraine and HSA is bidirectional. The prevalence of migraine in HSA patients is 25-47% [15, 16, 20]. Ostergaard [14] showed that the presence of concomitant migraine or tension headache was significantly associated with the recurrence of periods lasting weeks to months in which HSA occurred. Patients without another primary headache often have only one HSA period or episode and a more favourable prognosis. Migraine is co-morbid in 30% of Type 2 as opposed to 9% of Type 1. Co-morbidity is also seen with exertional headaches: 35% of Type 2 and 9% of Type 1 [17, 18]. There can be simultaneous onset of benign exertional headache (BEH) and HSA [22], as well as HSA after a history of BEH [16, 22].
Several drugs have been linked in case reports to sexual headaches associated with neurologic symptoms: amiodarone [23], birth control pills [24], pseudoephedrine [7] and cannabis [25]. An interesting more recent addition to HSA is that resulting from the use of PDE5 medication to assist with erectile difficulties [26, 27].
In type 2 headaches, increased intracranial pressure secondary to a Valsalva maneuver during orgasm has been proposed as a possible mechanism. Blood pressure may increase by 40-100mmHg systolic and 20-50mmHg diastolic during orgasm [7, 28-30]. A possible disruption of autoregulation of the cerebral vasculature has also been proposed [31-33].
Classic presentation
The classic presentation is a middle-aged male patient, in poor physical shape, mildly to moderately overweight, and mildly to moderately hypertensive [34]. In women, muscle contraction and psychological factors are often involved [34].
The typical story is that the headache occurs during sexual activity, is bilateral, and stops or is less severe if sexual activity stops prior to orgasm. The duration varies from 5 minutes to 2 hours if sexual activity stops, and from 3 minutes to 4 hours, with the possibility of milder symptoms for up to 48 hours, if activity continues.
Differential diagnosis
With the first episode it is absolutely mandatory to exclude potentially life threatening and disabling causes. A thorough history and neurological examination with the option of imaging studies and CSF examination must be conducted.
Type 2 explosive “thunderclap” headaches can be secondary to subarachnoid haemorrhage, aneurysms without obvious rupture, intracerebral haemorrhage, pituitary apoplexy, venous sinus thrombosis, cervical artery dissection, subdural haematoma, haemorrhage into an intracranial neoplasm [35], cerebral tumour [36], intracranial hypotension and hypertension, significant cervical spine disease, and ischaemic stroke [37-43] and these serious conditions need to be excluded before an HSA diagnosis can be given. HSA may present similarly to paroxysmal headaches caused by phaeochromocytoma [44].
Sexual intercourse is reported as a precipitating cause of subarachnoid haemorrhage in 3.8% to 12% of patients with bleeding from a ruptured aneurysm [35].
Course of the disease
The unpredictable clinical course falls into 2 temporal patterns: an episodic course with remitting bouts, and a chronic course [20]. In most cases the headaches occur in bouts that recur over periods of weeks to months before resolving [16, 45].
The episodic type is defined as a bout of at least 2 attacks occurring in ≥ 50% of sexual activity followed by no attack for ≥ 4 weeks despite continuing sexual activity. The chronic course is defined as ongoing HSA attacks for ≥ 12 months without remission of ≥ 4 weeks [20].
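These two temporal definitions translate directly into a small decision rule. The Python sketch below follows the criteria quoted above from Frese [20]; the parameter names are illustrative only.

```python
# Illustrative decision rule for the temporal pattern of HSA (after
# Frese [20] as quoted above); parameter names are ours.
def classify_course(attacks_in_bout: int,
                    fraction_of_sexual_activity_with_attack: float,
                    attack_free_weeks_despite_activity: float,
                    months_of_attacks_without_4wk_remission: float) -> str:
    """Return 'chronic', 'episodic' or 'unclassified' per the stated criteria."""
    if months_of_attacks_without_4wk_remission >= 12:
        return "chronic"   # ongoing attacks for >= 12 months, no >= 4-week remission
    if (attacks_in_bout >= 2
            and fraction_of_sexual_activity_with_attack >= 0.5
            and attack_free_weeks_despite_activity >= 4):
        return "episodic"  # bout of >= 2 attacks, then >= 4 attack-free weeks
    return "unclassified"
```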
Further uncertainty is experienced by the patient as HSA does not necessarily occur in every sexual encounter [7, 19]. A characteristic of HSA is the sporadic vulnerability of patients to the headache. Episodes can occur singly, in clusters or at irregular intervals. Recurrence can occur years later.
The acute HSA attacks are usually short-lasting but the overall duration of pain can vary widely [17]. The mean duration of severe pain in HSA is similar (30 minutes) in Type 1 and Type 2 but the mean duration of milder pain is more prolonged with Type 2 (4 hours vs 1 hour). About 15% of patients suffer from severe pain for more than 4 hours, needing acute treatment. Severe pain continuing for 2-24 hours occurs in up to 25% of patients with HSA [17]. Patients with episodic HSA, compared to chronic HSA, have an earlier age at onset and tend to suffer more often from concomitant BEH [20].
About 30% of patients report headaches with masturbation as well as intercourse. There are also reports of HSA occurring exclusively during masturbation [46, 47] and a case of this occurring with nocturnal emission [21].
Overall HSA occurs more commonly when the patient is tired, under stress or attempting intercourse for the second or third time in close succession [48]. HSA appears in bouts lasting weeks to months and can disappear without specific treatment [14, 16]. The number of attacks within one bout ranges from 2 to 50 [17]. About 25% of patients suffer attacks without longer remissions.
Prognosis
Prognosis is usually good for HSA as it is a benign self-limiting disorder and disappears without any specific treatment in the majority of patients [17]. It is usually better if there has been only one attack, especially if it was not associated with any other type of headache.
Frese [20] concluded that episodic HSA occurs in approximately 75% and chronic HSA in approximately 25% of patients. However even in chronic HSA, the prognosis is favourable, with remission rates of 69% in patients followed over 3 years.
Management
A thorough history and examination are mandatory in a first attack.
Referral is warranted if:
Atypical story and suspicious examination
First episode of severe headache where headache still present
A recurrent episode of severe headache with longer than average duration
Neck stiffness, photophobia or vomiting
Altered consciousness or confusion
Focal neurological signs
Previous history of AV malformation, neoplasms or neurosurgery
Investigations
Computed tomography
MRI
Lumbar puncture
Cerebral angiography
Urinary catecholamines
Medical treatment
Turner [49] has provided a good review.
Pre-emptive treatment
Propranolol hydrochloride (Inderal) is effective in the prophylaxis of HSA [19]. Naratriptan 2.5mg has been reported as useful prior to sexual activity [50] but, due to lower absorption rates, needs to be taken more than 60 minutes before sexual activity [30]. Indomethacin 25-100mg can be taken 30-60 minutes prior to sexual activity [15, 16, 45, 51] and used for acute severe pain management [20], but can cause serious gastrointestinal side-effects and is not tolerated by about 10% of headache patients [52].
Acute treatment
Triptans shorten the attack in about 50% of patients [30]; an 80% response rate has been reported [30]. Analgesics (ibuprofen, diclofenac, paracetamol, acetylsalicylic acid) given after onset of headache are of limited or no value in nearly all patients [45].
Other triptans, ergots and benzodiazepines have also been reported to have efficacy [5, 24, 53, 54] for acute and pre-emptive treatment in patients who do not tolerate indomethacin. Taken 30 minutes before sexual activity, they shorten orgasmic headache attacks in 66% of users [30].
Long term prophylaxis for longer lasting bouts or continued attacks
Options include indomethacin 25mg three times a day, propranolol 120-240mg per day, metoprolol 100-200mg per day and diltiazem 180mg per day [15, 19, 20, 22, 24, 45]. There is about an 80% response rate [30].
Sexual management
Trauma due to pain associated with sexual activity has the potential to affect immediate and long-term satisfaction with sexual activity unless specifically addressed. HSA can be very distressing for both patient and partner, with the development of fears around sexual activity and orgasm. Patients may develop patterns of impaired sexual arousal. If these fears are not exposed and dealt with, sexual problems may occur. Secondary avoidance behaviours may become established in the relationship, leading to a decrease in the couple's physical affection, eroticism and sexual activity. Patients must be given the opportunity to talk about sexual fears in an ongoing way, especially if HSA is chronic.
The social and relationship history will disclose areas of stress, which should be evaluated and managed as well as possible. In type 1 HSA, where neck and jaw tension may be a factor, conscious relaxation of these muscles during intercourse may help [7]. Relaxation exercises, especially those concentrating on neck and shoulder tension, can be done regularly and particularly before anticipated sexual activity.
Individuals often sense early in the lovemaking process whether or not HSA will occur and encouragement not to pursue orgasm on that occasion can be helpful. Some patients can terminate the headache by stopping the sexual activity or suppressing orgasm and about 51% can lessen the intensity of pain by being more sexually passive [18].
Advice on continuing to engage with the partner despite ceasing or modifying one's own sexual arousal needs to be given. Having a disappointed or resentful partner increases the distress of the condition, so the partner's needs have to be discussed. Patients often have difficulty talking about sexual issues with both their partner and their doctor; therefore the doctor needs to be the one to raise the subject.
A brief sexual history will outline the love-making practice and modification to sexual positions, especially where neck tension is exaggerated, may help. In one report, the advice to engage in intercourse more frequently but less strenuously resulted in a reduction in headaches [5].
Avoiding sexual activity and strenuous activities until totally symptom free has been recommended by some [13, 22, 24, 55]. This may be difficult to follow, as the capricious nature of HSA makes it difficult to know when attacks have ceased.
Conclusion
HSA is benign, but because it can mimic serious conditions, patients need to be properly assessed before reassurance is given and management of HSA started. Because pain can alter sexual experience and behaviour around sexuality for the patient and the couple, this aspect of patient wellbeing must be addressed by the treating physician for good holistic management. As not everyone is comfortable with addressing sexuality with patients, respectful acknowledgement of the situation and appropriate referral can be a useful approach.
The Western world is experiencing a rapid increase in the incidence of femoral neck fractures, from 50,000 fractures in 1990 to a projected 120,000 in 20151, as the age of the population increases. Hip fractures account for approximately 20 percent of orthopaedic bed occupancy in Britain, at a total cost of up to £25,000 per patient1. Around half of these fractures are intracapsular in nature, of which two-thirds are displaced.
The ideal surgical treatment for displaced intracapsular femoral neck fractures remains controversial with studies indicating a lack of consensus among treatment centres2,3. Options include reduction with internal fixation, cemented or cementless hemi-arthroplasty and total hip replacement. Internal fixation is less traumatic than arthroplasty but has a higher re-operation rate4,5 whilst cemented femoral prostheses are associated with a lower rate of revision compared to cementless implants. In addition there are statistically significant improvements in pain scores, walking ability, use of walking aids and activities of daily living within the cemented group6,7. The cementation process may however be associated with increased morbidity due to fat embolisation and increased length of operation8.
Treatment planning for intracapsular fractures, therefore, needs to take into account the patient’s medical fitness and activity level as well as the cost-effectiveness of the procedure.
Figure 1: Exeter Trauma Stem (ETS) Implant
The Exeter Trauma Stem is a new monoblock unipolar implant based on an intermediate, size 1.5, 40mm offset Exeter stem with a large head sized to match the patient’s anatomy (Figures 1 and 2).
Figure 2: X-ray of ETS with correct length. Neck cut has been made 1cm above lesser trochanter with shoulder of prosthesis sunk below greater trochanter to ensure equal leg length
As yet there are no independent published series of the results of using this implant. Purported advantages of the ETS include the use of a tried and tested polished, tapered stainless steel stem with which many primary hip surgeons are familiar, ease of ‘cement-in-cement’ revision to a total hip replacement should the patient develop acetabular erosion and the relatively low cost of £240 compared to many contemporary cemented implants.
This study prospectively evaluates the first 50 ETS hemiarthroplasties performed at the Norfolk and Norwich University Hospital, UK over a six month period providing an indication of early outcomes and complications involved with the use of this prosthesis.
METHOD
Patients presenting to our unit with a displaced intracapsular femoral neck fracture who were sufficiently active to get out of their home independently, had an ASA grade of 1 or 2 and were not significantly cognitively impaired were treated with a cemented ETS prosthesis. In addition, patients with displaced intracapsular fractures associated with significant comminution of the medial femoral neck precluding the use of our standard calcar-bearing Austin Moore (Stryker Howmedica Osteonics Ltd) hemiarthroplasty were also treated with an ETS regardless of functional capability and medical condition.
The first fifty patients who underwent ETS hemiarthroplasty as a primary treatment for fractured neck of femur were included in the study. Four patients were excluded. Two of these patients had an ETS performed due to failure of cancellous screw fixation and two as part of a two stage revision for infected uncemented prosthesis.
All fifty procedures were performed with the patient in the lateral position via the modified lateral approach, with the glutei incised at the musculotendinous junction. Cefuroxime was given on induction in each instance, followed by two post-operative doses at eight and sixteen hours after the procedure. Patients were scored by the hospital protocol for risk of thrombosis and were administered aspirin or subcutaneous low-molecular-weight heparin as appropriate. All drains were removed between twenty-four and forty-eight hours and patients were mobilised within one day of operation as pain allowed.
Patient demographics and operative details were gathered both from the patients’ notes and from the ORSOS computerised theatre system.
Radiographic evaluation involved the Barrack9 cementation grading system, Dorr’s criteria10,11 including varus/valgus alignment of the prosthesis, and leg length measurement. Measurements of length and varus/valgus were performed using the PACS (GE Medical Systems 2005) digital imaging system by two orthopaedic registrars independent of one another.
Finally, all fifty patients were sent an Oxford Hip Score12 questionnaire between two and four months postoperatively. Three patients died before the questionnaires were sent; of the remaining forty-seven there was a 98% response rate, with 44 questionnaires completed solely by the patient and a further two completed with the aid of a carer.
RESULTS
1. Patient Demographics and operative details
Of the fifty patients in the study, thirty six were female and fourteen male. The mean age was 78 (range 38 to 99). Forty four ETS hemiarthroplasties were performed due to patient fitness and activity levels (Type 1 patients) with six undertaken in frail patients due to fracture extension into the calcar (Type 2). All type 1 patients were ASA grade 1 or 2 with all type 2 patients ASA grade 2-4. All type 1 patients had a mini-mental test score of 10/10 with type 2 patients ranging from 0-7.
The mean delay to surgery was 26 hours (9-58). Eight procedures were performed by consultants, thirty-eight by registrars (training years three to six) and four by the trauma fellow under supervision by a senior. The mean operative time was sixty-four minutes and the mean haemoglobin drop was 2.6 g/dl. Seven patients required post-operative transfusion of either two or three units of packed cells.
Thirty four of the patients mobilised unaided pre-injury with eight using one stick, four using two sticks and four using a frame. Using the four categories above, the average drop in mobility from injury to discharge was 1.6 levels.
The average hospital stay was 8.6 days (range 5-69) with thirty five patients discharged to their own house, four to their own residential home and eleven to a rehabilitation ward.
2. Radiographic Evaluation
The cement mantle was firstly evaluated using Barrack’s grading:-
Grade A: medullary canal completely filled with cement (‘white-out’).
Grade B: a slight radiolucency exists at the bone-cement interface.
Grade C: a radiolucency of more than 50% at the bone-cement interface.
Grade D: radiolucency involving 100% of the interface between bone and cement in any projection, including absence of cement distal to the stem tip.
Post-operative radiographic evaluation according to this system showed that 54% of cement mantles were Barrack grade B (27 cases) with the majority of the remainder grade C (12 cases) and grade A (eight cases). Two were graded as D.
Dorr’s criteria were employed firstly to assess whether there was an adequate cement thickness of 3mm in Gruen zones 3 and 7 and of one centimetre distal to the tip of the prosthesis. Thirty-four prostheses scored 3/3, nine scored two, four scored one and two scored none.
Dorr’s criteria also assess position of the prosthesis using the AP radiograph. Ten prostheses were placed in a neutral position related to the femoral shaft. Seven were placed in 1-2 degrees of varus, twenty-seven were placed in 1-2 degrees of valgus and five were placed in 3-6 degrees of valgus.
There were equal leg length measurements in nineteen patients post-operatively, with two patients left 5-10mm short on the operated side. Twenty-eight patients were left long, with a mean lengthening of 12mm (5-30); of these, five were left between 20 and 30mm long, one of which was irreducible and needed to be revised on the table.
3. Post-operative Scoring
The Oxford Hip Score contains six questions relating to pain and six relating to function and mobility, each scored from 1 point for the best outcome to 5 for the poorest (total score 12-60). The average pain score was 12.0 and the average functional score was 15.2, giving an overall score of 27.2. The type 1 patients fared better, with an average score of 25.3; the average score for type 2 patients was 44.3.
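As a worked illustration of this arithmetic only, the summation can be expressed as follows; the function and the example responses are our own, and this is not the validated questionnaire or its official scoring software.

    def oxford_hip_score(pain_items, function_items):
        # Illustrative sketch: six pain items and six function/mobility
        # items, each coded 1 (best outcome) to 5 (poorest); total 12-60,
        # so lower totals indicate a better outcome.
        assert len(pain_items) == 6 and len(function_items) == 6
        assert all(1 <= x <= 5 for x in pain_items + function_items)
        pain, function = sum(pain_items), sum(function_items)
        return pain, function, pain + function

    # A patient answering 2 on every item scores (12, 12, 24).
    print(oxford_hip_score([2] * 6, [2] * 6))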
4. Complications
The one immediate complication was the need for an on-table revision due to an irreducible prosthesis.
There was one superficial wound infection requiring antibiotic therapy and one early deep infection requiring open washout in theatre which resolved the infection in combination with antibiotic therapy.
There were three deaths (one CVA, one MI and one from pneumonia), all of which occurred between 30 and 90 days after the operative procedure.
DISCUSSION
The cohort of patients included in this study was similar to other studies with regards to male:female ratio, age and cognitive function4,5. The patients also experienced a delay to surgery and length of operation similar to previous studies4,7. The length of inpatient stay, however, was markedly better at 8.6 days compared to approximately fourteen to twenty-one days cited in the literature13,14.
The length of operation, post-operative mobility and transfusion requirements were also similar to studies evaluating hemiarthroplasty outcomes4,5.
Post-operative radiographic evaluation showed greater than 50% of cement mantles were Barrack grade B with the majority of the remainder grade C (24%) and A (16%). There was no statistical difference between our findings and those of an 8-12 year study of the Exeter stem in total hip replacement15. The two Barrack D grade cement mantles were in patients who became unwell intra-operatively and the decision was taken not to pressurise during cementation.
Figure 3: Original ETS broach with squared off handle, not allowing intra-operative trialling
The major difficulty evident from this study is the correct positioning of the ETS prosthesis with regard to restoration of accurate leg length, which the authors believe was due to two factors. Firstly, the original set for the Exeter Trauma Stem comes with one femoral broach (Figure 3), which does not allow trial reduction. Positioning of the prosthesis therefore required intra-operative estimation of the correct leg length, which can be difficult with hip fractures as the leg length is abnormal at the commencement of surgery. The centre of rotation of the femoral head on the injured side was therefore approximated by comparison with the contralateral side on the pelvic AP radiograph and referenced against the level of the greater trochanter during the procedure.
Secondly, because the large monoblock head of the ETS is matched to the patient’s own femoral head anatomy, the diameter of the ETS head is generally around 15-30mm wider than the 28mm heads commonly used with the Exeter stem in elective hip arthroplasty. Care must therefore be taken to sink the stem by a corresponding amount if a similar neck cut is used, or the femoral neck osteotomy should be made at a more distal level. This often involves positioning the shoulder of the ETS stem below the level of the greater trochanter. This can mislead surgeons who are familiar with the Exeter stem, as placing the ETS stem in a similar position to that employed with smaller-head elective arthroplasty results in limb lengthening. Figure 4 shows a leg length discrepancy of 15mm despite a low neck cut, as the stem has not been sunk sufficiently. This led to 56% of patients being left with true lengthening of the operated limb and one prosthesis being irreducible. It is difficult to assess whether this is a common problem with other hemiarthroplasties used for femoral neck fractures, as none of the comparable studies in the literature comment on clinical or radiographic assessment of leg length.
Figure 4: X-ray of ETS with limb lengthening. Although the neck cut has been made relatively low in relation to the lesser trochanter, the shoulder of the prosthesis slopes marginally above the greater trochanter, inadvertently lengthening the operated limb.
One major advantage to the tapered Exeter stem is the ease with which conversion to a total hip replacement can be performed using an in-cement technique16. Many of the patients included in this study were below the age of 70 and a proportion could be expected to outlive the prosthesis especially with regards to acetabular erosion4. Whilst none of this cohort has required revision for loosening, the irreducible Exeter implant was revised on-table using this technique without further complication.
Post operative Oxford Hip Scores were encouraging with no difference between our mean score of 27.2 and other studies evaluating both cemented hemiarthroplasty and total hip replacement following femoral neck fracture12,17,18.
The mortality rate was 6% at six to twelve months post-surgery, with all three deaths occurring more than one month after surgery and apparently unrelated to the surgery itself. Overall mortality rates following neck of femur fracture are approximately thirty percent at one year; however, studies specifically looking at outcomes following cemented hemiarthroplasty in the fit and active patient have found mortality rates similar to this study5,19.
Costing around £240, the ETS is a relatively cheap prosthesis in comparison to cemented bipolar prostheses, despite the additional expense of a cement restrictor, bone cement, cement gun and cement pressurisers.
In conclusion, the Exeter Trauma Stem (ETS) is an effective method of treating displaced intracapsular neck of femur fractures with encouraging post-operative functional, pain and radiographic scoring outcomes. The message highlighted by this study is that additional care is needed with regards to the correct positioning of the prosthesis to ensure the restoration of limb length. Subsequent to discussion with the Stryker representative regarding the results of this study, a second generation trialling system has been added to the set with a modular broach. The authors suggest that not only should these modular broaches be used, but also accurate pre-operative planning is needed to ensure equal leg lengths post-operatively.
Definition of restraint: a device or medication that is used to restrict a patient’s voluntary movement.
Prevalence of physical restraints: up to 17% in acute care settings.
Prevalence of chemical restraints: up to 34% psychotropic drug use in long-term care facilities.
Complications of restraints: include documented falls, decubitus ulcers, fractures, and death.
Regulations: require documentation of indications plus failure of alternatives by a licensed professional.
Prevention of removal of life-sustaining treatment: a relatively clear indication for restraints.
Informed consent: including consideration of risks, benefits, and alternatives is necessary in all cases.
Barrier to reducing restraints: a misguided belief that, by use, one is preventing patient injury.
Steps can be taken to limit their use: including an analysis of behaviours precipitating their use.
Case study
A 79 year old female nursing home resident with frontotemporal dementia and spinal stenosis has a chronic indwelling catheter for cauda equina syndrome and neurogenic bladder. Attempts to remove the catheter and begin straight catheterization every shift were met by the patient becoming combative with the staff. Replacing the catheter led to repeated episodes of the patient pulling out the catheter. The patient lacks decision-making capacity to weigh the risks, benefits, and alternatives, but she clearly does not like having a catheter in. The attending physician instituted wrist restraints pending a team meeting. Unfortunately, attempts by the patient to get free led to dislocation of both shoulders and discharge to the hospital.
Introduction
A restraint is any device or medication used to restrict a patient’s movement. In the intensive care unit, for example, soft wrist restraints may be used to prevent a patient from removing a precisely placed endotracheal tube. A lap belt intended to prevent an individual from falling from a wheelchair in a nursing home is a restraint if the patient is unable to readily undo the latch.1 In the case study above of a catheterized, demented patient, if medication is used to prevent the patient from striking out at staff when performing or maintaining catheterization, then the medication is considered a restraint.
There are few data on the efficacy and benefits of restraints1. Even when the indication to use a restraint is relatively clear, the outcome is often opposite of the intention. Consider that restraints used to keep patients from pulling out their endotracheal tubes are themselves associated with unplanned self-extubation2. Complications of restraints can be serious, including death resulting from medications or devices3,4. Use of restraints should be reserved for documented indications, should be time limited, and there should be frequent re-evaluation of their indications, effectiveness, and side effects in each patient. Lack of a Food and Drug Administration (FDA) approved indication for use of medications as restraints in agitated, aggressive, demented patients has led to recommendations that medications in these situations be used only after informed consent with proxy decision makers5. Medical, environmental, and patient-specific factors can be root causes of potentially injurious behavior to self or others, as in the case study above. To ensure consideration and possible amelioration of these underlying causes, the Center for Medicare and Medicaid Services (CMS) in 2006 required face-to-face medical and behavioral evaluation of a patient by a physician (licensed independent practitioner) within one hour after restraints are instituted. As a result of controversy surrounding this rule, a clarification in 2007 allowed a registered nurse or physician assistant to perform the evaluation provided that the physician is notified as soon as possible6. In-depth situational analysis of the circumstances surrounding the use of restraints in individual cases, as well as education of the patient, family, and caregivers, may lead to the use of less restrictive alternatives7.
Frequency of restraint use
Frequency of restraint use depends on the setting, the type of restraint, and the country where restraint use is being studied. In the acute care hospital setting, reported physical restraint use was 7.4% to 17% a decade ago8. Two decades ago, prevalence in long-term care facilities was reported as 28%-37%9. There has been a steady decline over the past several decades, coincident with regulation, such that, according to the Department of Health and Human Services, it is down to about 5% since newer CMS rules went into effect in 2007. In contrast, some European nursing homes still report physical restraint use of 26% to 56%10,11.
Chemical restraint is slightly more prevalent than physical restraint, with a prevalence of up to 34% in long-term care facilities in the US prior to regulations12. There is some indication that prevalence may be decreasing, some say markedly, perhaps as a result of government regulation12,13. Interestingly, one case-control study of more than 71,000 nursing home patients in four states showed that patients in Alzheimer special care units were no less likely to be physically restrained than those in traditional units. Furthermore, they were more likely to receive psychotropic medication14.
Complications of restraint use
The use of chemical and physical restraints is associated with an increase in confusion, falls, decubitus ulcers, and length of stay15,16. Increase in ADL dependence, walking dependence, and reduced cognitive function from baseline has also been reported17. Use of restraints often has an effect opposite the intended purpose of protecting the patient, especially when the intent is prevention of falls18. Physical restraints have even caused patient deaths. These deaths are typically due to asphyxia when a patient, attempting to become free of the restraint, becomes caught in a position that restricts breathing4,19.
Antipsychotic medications may be used as restraints in elderly patients with delirium or dementia who become combative and endanger themselves and others; however, there is no FDA approval for these drugs for this use5. In a meta-analysis, an increased relative risk of mortality of 1.6 to 1.7 in the elderly prompted the FDA to mandate a “black box” label on atypical antipsychotic medications stating that they are not approved for use in the behavioral manifestations of dementia20. Other research suggests that conventional antipsychotics are just as likely to cause death, if not more so3. Forensic research also links antipsychotic medication and patient deaths21. The reported relative risk of falls from these drugs is 1.722. Given the risks, if antipsychotic medications are used at all, they need to be prescribed as part of a documented informed-consent process. Education of patients, families of patients, and facility staff about the harms of restraints is a good first step in a plan to avoid or eliminate their use. Over the past several decades, regulations have arisen in the United States because of complications of restraints and a lack of clear evidence supporting their use.
The regulatory environment in the United States
The Omnibus Budget Reconciliation Act of 1987 (OBRA 87) resulted in regulations that specify the resident’s right to be free of restraints in nursing homes when they are used for the purpose of discipline or convenience and are not required to treat the resident’s medical symptoms23,24. OBRA 87-related regulations also specified that uncooperativeness, restlessness, wandering, or unsociability are not sufficient reasons to justify the use of antipsychotic medications. If delirium or dementia with psychotic features is to be used as an indication, then the nature and frequency of the behavior that endangers the resident, endangers others, or interferes with the staff’s ability to provide care must be clearly documented24. Comprehensive nursing assessment of problem behaviors, a physician order before or immediately after instituting a restraint, and documentation of the failure of alternatives to restraint are required before the use of a restraint is permitted. The restraint must be used for a specific purpose and for a specified time, after which reevaluation is necessary.
The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) instituted similar guidelines that apply to any hospital or rehabilitation facility location where a restraint is used for physical restriction for behavioral reasons25. In response to the 1999 Institute of Medicine report, To Err is Human, JCAHO focused on improving reporting of sentinel events to increase awareness of serious medical errors. Not all sentinel events are medical errors, but they imply risk for errors as noted in the revised 2007 JCAHO sentinel event definition: A sentinel event is an unexpected occurrence involving death or serious physical or psychological injury, or the risk thereof6. The JCAHO recommends risk reduction strategies that include eliminating the use of inappropriate or unsafe restraints. The recommendations for restraint reduction are prioritized along with items like eliminating wrong site surgery, reducing post-operative complications, and reducing the risk of intravenous infusion pump errors6. It is clear that JCAHO considers placing restraints as a sentinel event to be monitored and reported. CMS and JCAHO have worked to align hospital and nursing home quality assurance efforts especially with respect to the standard concerning face to face evaluation of a patient within one hour of the institution of restraints. They held ongoing discussions that resulted in revised standards for the use of restraints in 200926. Among the agreed upon standards are: policies and procedures for safe techniques for restraint, face to face evaluation by a physician or other authorized licensed independent practitioner within one hour of the institution of the restraint, written modification of the patient’s care plan, no standing orders or prn use of restraints, use of restraints only when less restrictive interventions are ineffective, use of the least restrictive restraint that protects the safety of the patient, renewal of the order for a time period not to exceed four hours for an adult, restraint free periods, physician or licensed independent practitioner daily evaluation of the patient before re-ordering restraint, continuous monitoring, and documentation of strategies to identify environmental or patient specific triggers of the target behavior. The one hour face to face evaluation may be accomplished by a registered nurse provided that the attending physician is notified as soon as possible26.
Indications for use of restraints
The risk of using a restraint must be weighed against the risk of not using one when physical restriction of activity is necessary to continue life-sustaining treatments such as mechanical ventilation, artificial feeding, or fluid resuscitation. Every attempt should be made to allow earlier weaning from these treatments, thereby rendering the restraint unnecessary. Even in cases where the indication is relatively clear, the risks, benefits, and alternatives must be weighed (see Figure).
In an emergency, when it is necessary to get a licensed provider’s order for a restraint to prevent a patient from disrupting lifesaving therapy or to keep a patient from injuring others, an analysis of what may be precipitating the episode is essential. Are environmental factors such as noise or lighting triggering the behavior? Are patient factors such as pain, constipation, dysuria, or poor vision or hearing triggering the disruptive behavior? Is there an acute medical illness? Is polypharmacy contributing? Psychotropic drugs and drugs with anticholinergic activity are common culprits. Patient, staff, family, and other health care providers need to be queried.
One must guard against perceiving the continued need for life-sustaining treatment and the use of restraints as independent factors, because that misconception can lead to a vicious cycle. For example, a patient with persistent delirium from polypharmacy may need artificial nutrition and hydration, which perpetuates the need for continued chemical and physical restraints. Correcting the polypharmacy and considering the restraint as a potential cause of the delirium can break the cycle. When restraints are indicated, one must use the least-restrictive restraint to accomplish what is needed for the shortest period of time. Restraint-free periods and periodic reassessments are absolutely required.
A weaker indication is the use of restraints to prevent patient self-injury when the danger is not imminent. Such an indication exists when a patient repeatedly attempts unsafe ambulation without assistance or when he or she cannot safely ambulate early in the process of rehabilitation from deconditioning or after surgery. In these cases, weighing the risks and benefits of the restraint is more difficult than when considering restraints to maintain life-sustaining treatment.
Even more difficult to justify is the use of restraints to restrict movement to provide nonurgent care. An example might be a patient who repeatedly removes an occlusive dressing for an early decubitus ulcer. In these cases, it is more fruitful to use alternatives to restraints. For example, considering alternatives to a urinary catheter is more important than documenting that restraints are indicated to keep the patient from pulling it out.
If used, the specific indication, time limit, and plan for ongoing reevaluation of the restraint must be clearly documented. Effectiveness and adverse effects must be monitored. Restraint-free periods are also mandatory. The same is true for chemical restraints. Periodic trials of dosage reduction and outcome are mandatory.
Barriers to reducing the use of restraints
Perceived barriers to reducing restraints can be thought of as opportunities to build relationships between patients, physicians, staff, patients’ families, and facility leaders. A legitimate fear of patient injury, especially when the patient is unable to make his or her own decisions, is usually the root motivation to use restraints. Ignorance about the dangers of restraint use results in a sincere, but misguided, belief that one is acting in the patient’s best interest27. Attempts to educate physicians, patients, and staff may not have been made. These barriers are opportunities for the community to work together in creative partnerships to solve these problems. Even in communities where there are no educational institutions, there are opportunities for educational leadership among physician, nursing, and other staff. Conversely, lack of commitment to reducing restraints by institutional leaders will tend to reinforce the preexisting barriers. Regulatory intervention has been a key part of gaining the commitment of institutional leadership when other opportunities were not seized. On the other hand, competing regulatory priorities such as viewing a serious fall injury as a ‘never event’ and simultaneously viewing institution of a restraint as a sentinel event may lead to reduced mobility of the patient18. An example of this would be the use of a lap belt with a patient-triggered release. The patient may technically be able to release the belt, but the restricted mobility may lead to deconditioning and an even higher fall risk when the patient leaves the hospital. In the process of preventing the serious fall injury or ‘never event’ there is, even at the regulatory level, intervention that may not be in the patient’s best interest. These good intentions are, again, a barrier to the reduction of the use of restraints and an opportunity for physician leadership in systems based care collaboration. Physician leadership probably needs to extend beyond educational efforts. Evidence suggests education may be necessary but not sufficient to reduce the use of restraints10.
Reducing the use of restraints
Steps can be taken to reduce the use of restraints before the need for them arises, when the need for restraints finally does arise, and while their use is ongoing.
Programs to prevent delirium, falls in high-risk patients, and polypharmacy are all examples of interventions that may prevent the need for restraints in the first place. Attention to adequate pain control, bowel function, bladder function, sleep, noise reduction, and lighting may all contribute to a restraint-free facility.
When a restraint is deemed necessary, a sentinel event has occurred, and attempts to troubleshoot the precipitating factors must follow. Acute illness, such as infection or cardiac or respiratory illness, must be considered when a patient begins to fall or to remove life-sustaining equipment. Highly individualized assessment of the patient often requires input from physical therapy, occupational therapy, social work, nursing, pharmacy, and family. If root causes are identified and corrected, the need for restraints can be reduced and alternatives can be instituted.
The least restrictive alternative should be implemented when needed. For example, a lowered bed height with padding on the floor can be used for a patient who is at risk of falling out of bed, in contrast to the use of bedrails for that purpose. Another example is the use of a lap belt with a Velcro release as opposed to a vest restraint without a release. A third example is the use of a deck of cards or a lump of modeling clay to keep the patient involved in an activity that is an alternative to the target behavior endangering the patient or staff. Alternatives to the use of restraints need to be considered both when restraint use is initiated and during their use. Judicious use of sitters has been shown to reduce falls and the use of restraints28. When patient behaviors endanger the patient or others and restraints are deemed necessary, a tiered approach has been recommended by Antonelli29, beginning with markers and paper or a deck of cards for distraction and then proceeding up to hand mitts, lap belts, or chair alarms if needed. Vest or limb restraints are the default only when other methods have been ineffective29.
Literature from the mental health field provides some guidance to those attempting to use the least intrusive interventions for older patient behaviors that endanger themselves or others. A combination of system-wide intervention plus targeted training in crisis management has been demonstrated in multiple studies to be effective in reducing the use of restraints30. In a recent randomized controlled study, one explanation the author gives for the ineffectiveness of the educational intervention is that it was “at the ward level unlike other restraint reduction programs involving entire organizations”10. Research and clinical care in restraint reduction will likely need to be both patient-centered and systems-based in the future.
Case study revisited
Our 79 year old female with frontotemporal dementia and spinal stenosis noted in the case above pulls out her urinary catheter. The physician is called and determines that the patient’s urine has been clear prior to the episode, that she has no fever, and that she has no evidence of acute illness. Discussion with the staff establishes that the patient’s behavior is at the same baseline as before the catheter was inserted, so she is likely pulling the catheter out simply because of the discomfort caused by the catheter itself. The patient is unable to inhibit her behavior because of the frontotemporal dementia. The physician places a call to the medical power of attorney and explains the risks of bladder infection, bladder discomfort, renal insufficiency, and overflow incontinence from untreated neurogenic bladder. This is weighed against the risk of frequent infections and bladder discomfort from a chronic indwelling urinary catheter, or damage to the urethra from pulling the catheter out. The option of periodic straight catheterization is dismissed by the medical power of attorney as being too traumatic for this demented patient, who becomes agitated during the procedure.
The medical power of attorney considers the options and agrees to observation by the staff without the catheter overnight, with a team conference the next day. At the conference, it was noted that overnight the patient had several episodes of overflow incontinence in spite of being toileted every few hours while awake. The patient had no signs of discomfort and was changed when found to be wet. A bladder scan done at the facility showed a few hundred cubic centimeters of residual urine after the patient was noted to be wet and changed. The team conference yielded the informed decision to continue checking the patient frequently and changing her when wet, as well as providing frequent toileting opportunities.
The patient continued at baseline for twelve weeks until she developed urinary sepsis, at which point the medical power of attorney was contacted about additional care decisions.
Conclusion
A restraint is any device or medication used to restrict a patient’s movement. Complications of restraints can be serious including death resulting from both medications and devices. Use of restraints should be reserved for documented indications, should be time limited, and there should be frequent re-evaluation of their indications, effectiveness, and side effects in each patient. Analysis of environmental and patient specific root causes of potentially self-injurious behavior can lead to reduction in the use of restraints. Education of the patients, families, and the health care team can increase the use of less restrictive alternatives.
In the first part of this review we considered neurodegenerative and neurobehavioural diseases and the findings that these diseases commonly are associated with systemic and central nervous system bacterial and viral infections.1 In this second part we continue with psychiatric diseases, autoimmune diseases, fatiguing illnesses, and other chronic diseases where chronic infections play an important role.
Psychiatric diseases
Borrelia-associated psychiatric disorders
In addition to neurologic and rheumatologic symptoms, Borrelia burgdorferi has been associated with several psychiatric manifestations2, 3 (see also below). Such infections can invade the central nervous system and may cause or mimic psychiatric disorders or cause a co-morbid condition. A broad range of psychiatric conditions has been associated with Lyme disease, including paranoia, dementia, schizophrenia, bipolar disorder, panic attacks, major depression, anorexia nervosa and obsessive-compulsive disorder.4-7 For example, depressive states among patients with late Lyme disease are fairly common, ranging from 26% to 66%.3 It is not known whether B. burgdorferi contributes to overall psychiatric morbidity, but undiagnosed chronic Lyme disease caused by this spirochete is considered a differential diagnosis in patients with certain psychiatric symptoms such as depressive symptoms, lack of concentration and fatigue.
The neuropsychiatric sequelae of chronic Lyme disease remain unclear. Studies, some on large numbers of patients, have investigated whether a correlation exists between chronic Lyme disease (defined by seropositivity) and psychiatric disorders.8-11 Interestingly, different results were reported on the association between B. burgdorferi infection and psychiatric morbidity.8-11 For example, Hájek et al.8 compared the prevalence of antibodies to B. burgdorferi in groups of psychiatric patients and healthy subjects. Among the matched pairs, 33% of the psychiatric patients and 19% of the healthy comparison subjects were seropositive. In contrast, Grabe et al.11 did not find an association between Borrelia seropositivity and mental and physical complaints. In 926 consecutive psychiatric patients who were screened for antibodies and compared with 884 simultaneously recruited healthy subjects, seropositive psychiatric patients were found to be significantly younger than seronegative ones; this was not found in the healthy controls.10 However, none of the psychiatric diagnostic categories used in this study exhibited a stronger association with seropositivity.10 These findings suggest a potential association between B. burgdorferi infection and psychiatric morbidity, but fail to identify any specific clinical 'signature' of the infection. This might be due to the very low incidence in an endemic region (0.2%, CI 95% 0.0% to 1.1%), as demonstrated in 517 patients hospitalized for psychiatric diseases.9
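As a worked illustration of how such seropositivity proportions translate into an odds ratio with a confidence interval, the standard unmatched 2x2 calculation is sketched below. The counts are hypothetical, chosen only to mirror the 33% versus 19% proportions quoted above; the cited studies' actual 2x2 tables are not reproduced here, and a matched-pairs design such as that of Hájek et al. would strictly call for a conditional (McNemar-type) analysis rather than the Woolf interval shown.

    from math import exp, log, sqrt

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # Unmatched 2x2 odds ratio with a Woolf (log-based) 95% CI:
        # a = seropositive patients, b = seronegative patients,
        # c = seropositive controls, d = seronegative controls.
        or_ = (a * d) / (b * c)
        se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

    # Hypothetical: 33 of 100 patients vs 19 of 100 controls seropositive
    # gives an OR of about 2.10 (95% CI about 1.10 to 4.02).
    print(odds_ratio_ci(33, 67, 19, 81))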
In addition to serological data, clinical evidence for the association of psychiatric symptoms and post-Lyme disease has also been investigated. When mental and physical complaints were assessed with von Zerssen's complaint scale using multivariate analyses, the data revealed that definitions of seropositivity were not associated with increased mental or physical complaints.11 In contrast, when the SF-36 was used to determine Quality of Life (QOL) in post-Lyme patients, the average SF-36 physical component summary (40±9, range 29-44) and mental component summary (39±14, range 23-46) were worse than in the general USA population, and they could be significantly improved by anti-Lyme antibiotics (46% versus 18%, p=0.007).5 Barr et al.12 examined the relation between complaints of memory disturbance and measures of mood and memory functioning in 55 patients with serological evidence of late-stage Lyme borreliosis. There was a significant correlation between subjective memory ratings and self-reported depression (p<0.001) but not with objective memory performance, suggesting that subjective memory complaints in chronic Lyme patients are related to mood rather than to objective memory impairment. Using a structured psychiatric interview, the Positive and Negative Affect Schedule, the Lyme Symptom Checklist, and a battery of neuropsychological tests in 30 post-Lyme patients, participants did not appear to have an elevated incidence of psychiatric disorders or psychiatric history.13 Their mood, however, was characterized by lowered levels of positive affect and typical levels of negative affect, similar to the affect patterns in individuals with chronic fatigue syndrome (CFS). Similarly, Hasset et al.4, 7 reported on 240 consecutive post-Lyme patients who were screened for clinical psychiatric disorders, such as depression and anxiety. After adjusting for age and sex, these disorders were more common in symptomatic patients than in the comparison group (Odds Ratio=3.54, CI 95% 1.97-6.55, p<0.001), but personality disorders were comparable in both groups.
Although psychiatric co-morbidity and other psychological factors are prominent in post-Lyme patients, it remains uncertain whether these symptoms can be directly attributed to the chronic course of Borrelia infections or to other chronic illness-related factors.
Schizophrenia
Several microbes have been suspected as pathogenetic factors in schizophrenia, such as Chlamydia species, Toxoplasma, and various viruses. For example, a number of studies have reported associations between Toxoplasma gondii infection and the risk of schizophrenia, with an overall hazard ratio of 1.24.14 In addition, chlamydial infections have been found in 40% of schizophrenic patients compared to 7% in healthy controls.15 These infections represented the highest risk factor yet found to be associated with schizophrenia, and the association was highly significant (Odds Ratio=9.43, p=1.39 x 10^-10), especially with Chlamydophila psittaci (Odds Ratio=24.39, p=2.81 x 10^-7). Interestingly, schizophrenic carriers of the HLA-A10 genotype were clearly the most often infected with Chlamydia, especially C. psittaci (Odds Ratio=50.00, p=8.03 x 10^-5), pointing to a genetically related susceptibility.15 However, skepticism about the role of bacterial infection in schizophrenia has also been fostered by the low impact of anti-infectious treatment on the course of disease progression in schizophrenia.16
Genetic backgrounds, viral infections and/or reactivations, and cytokine-related pathomechanisms have also been proposed as causative for psychiatric disorders such as schizophrenia. Specific genetic patterns of MICB polymorphism (MHC class I polypeptide-related sequence B, chromosome 6p21) were identified in patients seropositive for CMV and HSV-1.17 Similar polymorphisms were found for COMT Val158Met related to serological evidence of HSV-1 infection in individuals with bipolar disorder.18 This serologic evidence of HSV-1 infection appeared to be associated with cognitive impairment in individuals with bipolar disorder19 and was found to be an independent predictor of cognitive dysfunction in individuals with schizophrenia.20 In addition, viral exposure during gestation has been described as a risk factor for schizophrenia. Offspring of mothers with serologic evidence of HSV-2 infection were at significantly increased risk for the development of psychoses (Odds Ratio=1.6; CI 95% 1.1-2.3). These results are consistent with a general model of risk resulting from enhanced maternal immune activation during pregnancy.21 However, this was not confirmed in another study.22 Similarly contradictory results were observed in a small group of 8 patients with schizophrenia in whom reactivation of herpesviruses (HSV-1, CMV, EBV, varicella-zoster virus and HHV-6) and other viruses (measles, rubella, mumps, influenza A and B and Japanese encephalitis viruses) during acute onset or exacerbation of schizophrenia was investigated: none of these viruses was detected in these patients.23 Also, a search for HSV-1 or varicella-zoster virus infection in postmortem brain tissue from schizophrenic patients did not reveal evidence of persistent CNS infection with these viruses.24
Schizophrenic patients show a number of cytokine changes that may be important in their condition. For example, differences in interleukin-2, -4 and -6, among other cytokines, have been seen in schizophrenic patients.25-27 Often these changes in cytokines or cytokine receptors have been linked to associated genetic changes found in schizophrenia.28-30 Monji et al.31 recently reviewed the evidence for neuroinflammation, increases in pro-inflammatory cytokines and genetic changes in schizophrenia and concluded that these changes are closely linked to activation of microglia. Although microglia comprise only about 10% of the total brain cells, they respond rapidly to even minor pathological changes in the brain and may contribute to neurodegeneration through the production of pro-inflammatory cytokines and free radicals. CNS infections could also activate microglia and cause similar events.
Neuropsychiatric Movement Disorders
Gilles de la Tourette’s syndrome (TS) is a neurological condition that usually begins in childhood and results in involuntary sounds or words (vocal tics) and body movements (motor tics). An association between infection and TS has been repeatedly described.32 Abrupt onset of the disease, usually after infection, was noted in up to 11% of these patients.33, 34 A role for streptococcal infections (PANDAS, see below) as a causative or mediating agent in TS was established several years ago.35 Additionally, the involvement of other infectious agents, such as B. burgdorferi or M. pneumoniae, has been described in case reports and small studies. For example, comparing 29 TS patients with 29 controls revealed significantly elevated serological titers in TS patients (59% versus 3%). This higher proportion of increased serum titers, especially IgA titers, suggested a putative role for M. pneumoniae in a subgroup of patients with TS.36 In predisposed persons, infection with various agents, including M. pneumoniae, should be considered as at least an aggravating factor, but an autoimmune reaction also has to be taken into account in TS patients. In addition, co-infections with toxoplasmosis have been described in a few case reports of obsessive-compulsive disorder (OCD).37 As mentioned above, streptococcal infections are likely to play a pivotal role in these syndromes.35
The pathogenic mechanism may be secondary to an activation of the immune system, resulting in an autoimmune response. This will be discussed in the next section.
Autoimmune Diseases
Infections are associated with various autoimmune conditions.38-40 Autoimmunity can occur when intracellular infections, such as cell-wall-deficient bacteria, are released from cells along with parts of host cell membranes that are then seen as part of a bacterial antigen complex, or when bacteria synthesize mimicry antigens (glycolipids, glycoproteins or polysaccharides) that are similar enough in structure to host antigens (molecular mimicry) to stimulate autoimmune responses against them. Alternatively, viral infections can weaken or kill cells and thus release cellular antigens, which can stimulate autoimmune responses, or they can incorporate host molecules like gangliosides into their structures.
In addition to molecular mimicry, autoimmunity involves several other complex relationships within the host, including inflammatory cytokines, Toll-like receptor signalling, stress or shock proteins, nitric oxide and other stress-related free radicals, among other changes that together result in autoimmune disease.38, 39
Guillain-Barré syndrome
Guillain-Barré syndrome (GB) is a demyelinating autoimmune neuropathy often associated with bacterial infections.40 Symptoms include pain, muscle weakness, numbness or tingling in the arms, legs and face, and trouble speaking, chewing and swallowing. Of the types of infections found in GB, Campylobacter jejuni, Mycoplasma pneumoniae and Haemophilus influenzae are often found.39 For example, Taylor et al.41 found serological evidence of C. jejuni in 5 of 7 patients with GB and other motor neuropathies, and Gregson et al.42 found anti-ganglioside GM1 antibodies that cross-reacted with C. jejuni lipopolysaccharide isolates. When infections were examined in GB cases in India, Gorthi et al.43 found that 35% and 50% of GB patients had serological evidence of C. jejuni and M. pneumoniae infections, respectively, while one-third of cases showed evidence of both infections. In Japan, Mori et al.44 found that 13% of GB patients had antibodies against Haemophilus influenzae. Autoantibodies stimulated by infections found in GB patients can cross-react with nerve cell gangliosides (anti-GM1, anti-GM1b and anti-GD1a, among others), and these are thought to be important in the pathogenesis of GB.45 Indeed, injection of C. jejuni lipo-oligosaccharide into rabbits induces anti-ganglioside antibodies and a neuropathy that resembles acute motor axonal neuropathy.46
Viruses have also been found to be associated with GB.40 Examples are: CMV,47 HIV,48 herpes simplex virus,49 West Nile virus,50 and HHV-6.51
Paediatric autoimmune neuropsychiatric disorders associated with Streptococci ('PANDAS')
Streptococcal infections in children are usually benign and self-limited. In a small percentage of children, however, prominent neurologic and/or psychiatric sequelae can occur. Post-streptococcal basal ganglia dysfunction has been reported with various manifestations, all of which fall into a relatively well-defined symptom complex or syndrome called paediatric autoimmune neuropsychiatric disorders associated with streptococcal infection (PANDAS).52
Evidence from past studies indicates that adults and children with a symptom course consistent with PANDAS experience subtle neuropsychological deficits similar to those seen with a primary psychiatric diagnosis of OCD or TS.53 PANDAS is now considered a well-defined syndrome in which tics (motor and/or vocal) and/or OCD are consistently exacerbated in temporal correlation with group A beta-hemolytic streptococcal infection. However, the pathological relationship of OCD or tics/TS in childhood to antecedent group A Streptococci is still not fully understood.52
In an epidemiological investigation, Leslie et al.54 assessed whether antecedent streptococcal infection(s) increase the risk of a subsequent diagnosis of OCD, TS, other tic disorders, attention-deficit hyperactivity disorder (ADHD) or major depressive disorder (MDD). Children with newly diagnosed OCD, TS, or tic disorder were more likely than controls to have had a diagnosis of streptococcal infection in the previous year (Odds Ratio=1.54, CI 95% 1.29-2.15). Previous streptococcal infection was also associated with incident diagnoses of ADHD (Odds Ratio=1.20, CI 95% 1.06-1.35) and MDD (Odds Ratio=1.63, CI 95% 1.12-2.30).54 Similar results were found in a retrospective, cross-sectional, observational study of 176 children and adolescents with tics, TS, and related problems.55 In a case-control study of children 4 to 13 years old, patients with OCD, TS, or tic disorders were more likely than controls to have had a streptococcal infection in the 3 months before the onset date (Odds Ratio=2.22; CI 95% 1.05-4.69). The risk was higher among children with multiple streptococcal infections within 12 months (Odds Ratio=3.10; CI 95% 1.77-8.96).56 Having multiple infections with group A beta-hemolytic Streptococcus within a 12-month period was associated with an increased risk for TS (Odds Ratio=13.6; CI 95% 1.93-51.0). Similar results were found in patients with typical symptoms of Tourette's syndrome.57 The frequency of elevated anti-streptolysin O titers was also significantly higher (p=0.04) in patients with attention-deficit hyperactivity disorder (64%) than in a control group (34%).58
Sydenham's chorea is one manifestation of post-streptococcal neuropsychiatric movement disorders. A pathogenic similarity between Sydenham's chorea, TS and other PANDAS has been suggested, since some patients can present with one diagnosis and later develop other neuropsychiatric conditions.59 These observations support a role for group A streptococcal infection and basal ganglia autoimmunity. Anti-basal ganglia antibodies associated with serologic evidence of recent streptococcal infection have been proposed as potential diagnostic markers for this group of disorders, of which Sydenham's chorea is the prototype.60
However, contradictory results have also been reported.61 For example, no association between symptom exacerbations and new group A beta-hemolytic streptococcal infections was observed among 47 paediatric patients with TS and/or OCD.59 In addition, the failure of immune markers of streptococcal infection to correlate with clinical exacerbations in a small study of children with paediatric autoimmune neuropsychiatric disorders raised concerns about the viability of autoimmunity as a pathophysiological mechanism in these syndromes.62 In a second study, however, the same group reported that patients who fit published criteria for paediatric autoimmune neuropsychiatric disorders associated with streptococcal infections represented a subgroup of those with chronic tic disorders and OCD, and that these patients may be vulnerable to group A beta-hemolytic Streptococcus infection as a precipitant of neuropsychiatric symptom exacerbations.63
Taken together, these findings provide epidemiological evidence that some paediatric-onset neuropsychiatric disorders, including OCD, tic disorders, ADHD and MDD, may be at least partially related to prior streptococcal infections. Group A beta-hemolytic Streptococcus infections are likely not the only events associated with symptom exacerbations in PANDAS patients, but they appear to play a role at least in a subgroup of these children. A potential genetic susceptibility to these post-infectious complexes has recently been proposed.64
The recent recognition that these paediatric neurobehavioural syndromes have infectious and/or immunological triggers has pointed to important new avenues for their management.
Chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME)
CFS/ME is a fatiguing illness characterised by unexplained, persistent, long-term disabling fatigue plus additional signs and symptoms, including neurophysiological symptoms.65 Brain imaging studies have shown that CFS/ME patients have dysfunction of the ventral anterior cingulate cortex as well as other brain MRI abnormalities.66, 67 CFS/ME patients also have immunological and inflammatory abnormalities, such as alterations in natural killer cell function68, 69 and cytokine profiles.70, 71 In addition, the hypothalamo-pituitary-adrenal axis, which plays a major role in stress responses, appears to be altered in CFS/ME.72
Most, if not all, CFS/ME patients have multiple chronic bacterial and viral infections.73-80 For example, when patients were examined for evidence of multiple, systemic bacterial and viral infections, the Odds Ratio for this was found to be 18 (CI 95% 8.5-37.9, p<0.001).75 In this study CFS/ME patients had a high prevalence of one of four Mycoplasma species (Odds Ratio=13.8, CI 95% 5.8-32.9, p<0.001) and often showed evidence of co-infections with different Mycoplasma species, C. pneumoniae (Odds Ratio=8.6, CI 95% 1.0-71.1, p<0.01) and HHV-6 (Odds Ratio=4.5, CI 95% 2.0-10.2, p<0.001).75 In a separate study the presence of these infections was also related to the number and severity of signs and symptoms in CFS/ME patients, including neurological symptoms.77 Similarly, Vojdani et al.76 found Mycoplasma species in a majority of CFS/ME patients, but this has not been seen in all studies.81 Interestingly, when European CFS/ME patients were examined for various Mycoplasma species, the most common species found was M. hominis,82 whereas in North America the most common species found was M. pneumoniae,75, 77 indicating possible regional differences in the types of infections in CFS/ME patients. In addition to Mycoplasma species, CFS/ME patients are also often infected with B. burgdorferi,80 and, as mentioned above, C. pneumoniae.75, 77, 83
Other infections, notably viruses, are also found in CFS/ME patients: CMV,84 parvovirus B19,78 enterovirus79 and HHV-6.75, 77, 85-88 For example, Ablashi et al.88 found that 54% of CFS/ME patients had antibodies against HHV-6 early protein, compared to 8% of controls. Similarly, Patnaik et al.86 found that 77% of CFS/ME patients were positive for HHV-6 early antigen IgG or IgM antibodies, whereas only 12% of control subjects had IgG or IgM antibodies to HHV-6 early antigen. Recently, a new retrovirus, XMRV, was found in the mononuclear blood cells of 67% of 101 chronic fatigue syndrome patients compared to only 3.7% of healthy controls. Cell culture experiments determined that the patient-derived virus was infectious and could possibly be transmitted.89
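Seroprevalence figures of this kind can be related to the odds ratios quoted elsewhere in this review. As an illustrative back-of-envelope calculation (not a value reported by the original authors), the Patnaik et al.86 proportions of 77% in patients versus 12% in controls correspond to

$$\mathrm{OR}=\frac{0.77/0.23}{0.12/0.88}\approx\frac{3.35}{0.136}\approx 24.6,$$

i.e., the odds of having antibodies to HHV-6 early antigen were roughly 25-fold higher in CFS/ME patients than in control subjects.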
Gulf War illnesses (GWI)
GWI is a syndrome similar to CFS/ME.90 In most GWI patients, the variable incubation time, ranging from months to years after presumed exposure, the cyclic nature of the relapsing fevers and other chronic signs and symptoms, and their subsequent appearance in immediate family members are consistent with an infectious process.90, 91 GWI patients were exposed to a variety of toxic materials, including chemicals, radiochemicals and biologicals, so not all patients are likely to have infections as their main clinical problem. Neurological symptoms are common in GWI cases.90 Baumzweiger and Grove92 have described GWI as a neuro-immune disorder that involves the central, peripheral and autonomic nervous systems as well as the immune system. They attribute a major source of the illness to brainstem damage and to central, peripheral and cranial nerve dysfunction caused by demyelination, and found that GWI patients have muscle spasms, memory and attention deficits, ataxia and increased muscle tone.92
Bacterial infections are a common finding in many GWI patients.90 Mycoplasmal infections were found in about one-half of GWI patients, and more than 80% of these cases were PCR-positive for M. fermentans.90, 91, 93-95 In studies of over 1,500 U.S. and British veterans with GWI, approximately 45% of GWI patients had PCR evidence of such infections, compared to 6% of the non-deployed, healthy population. Other infections found in GWI cases, at much lower incidence, were Y. pestis, Coxiella burnetii and Brucella species.90
When we examined the immediate family members of veterans with GWI who became sick only after the veteran returned home, we found that >53% had positive tests for mycoplasmal infections and showed symptoms of CFS/ME. Most (>80%) of the CFS/ME-symptomatic family members had the same Mycoplasma fermentans infection as the GWI patients, compared with the few non-symptomatic family members who had similar infections (Odds Ratio=16.9, CI 95% 6.0-47.6, p<0.001).91 In contrast, in the few non-symptomatic family members who tested Mycoplasma-positive, the Mycoplasma species were often different from the species found in the GWI patients (M. fermentans). The most sensible conclusion is that veterans came home with M. fermentans infections and then transmitted these infections to their immediate family members.91
Some other infectious diseases with neurological aspects
Lyme disease
Lyme disease, the most common tick-borne disease in North America, is caused by the entry, through a tick bite, of the spiral-shaped spirochete B. burgdorferi, often together with other co-infecting microorganisms.96 After incubation for a few days to a month, the Borrelia spirochete and co-infections migrate through the subcutaneous tissues into the lymph and blood, where they can travel to near and distant host sites, including the central nervous system.3, 97-99 Transplacental transmission of B. burgdorferi and co-infections can occur in pregnant animals, including humans, and blood-borne transmission to humans by blood transfusion is likely but unproven. The tick-borne co-infections associated with Lyme disease can, and usually do, appear clinically at the same time, complicating clinical diagnoses.100
The signs and symptoms of Lyme disease eventually overlap with those of other chronic illnesses, and patients are often diagnosed with illnesses such as CFS/ME, chronic arthritis or a neurological disease.80, 97-100 About one-third of Lyme disease cases start with the appearance of a round, red, bull's-eye skin rash (erythema migrans) at the site of the tick bite, usually within 3-30 days.100 Within days to weeks, mild flu-like symptoms can occur, including shaking chills, intermittent fevers and local lymph node swelling. After this localised phase, which can last weeks to months, the infection can spread to other sites, resulting in disseminated disease. In the disseminated (late) phase, patients present with malaise, fatigue, fever and chills, headaches, stiff neck, facial nerve palsies (Bell's palsy), muscle and joint pain, and other signs and symptoms.100-104
The disseminated (late) phase of Lyme disease is a chronic, persistent disease with ophthalmic, cardiac, musculoskeletal, central nervous system and internal organ invasion. When it involves the central and peripheral nervous systems, it is often termed neuroborreliosis.100, 104 At this late stage, arthritis, neurological impairment with memory and cognitive loss, cardiac problems (such as myocarditis and endocarditis causing palpitations, pain, bradycardia and hypertension) and severe chronic fatigue are usually apparent.80, 100-102 The signs and symptoms of the chronic (late) phase usually overlap with those of other chronic conditions, such as CFS/ME and chronic arthritis, as well as with neurodegenerative diseases, causing confusion in the diagnosis and treatment of patients with late-phase Lyme disease.80, 97, 100, 105 Patients with late-stage neuroborreliosis exhibit neuropathological and neuropsychiatric disease similar to some of the neurodegenerative diseases discussed in previous sections.1
Diagnostic laboratory testing for Lyme disease at its various clinical stages is not foolproof, and experts often use a checklist of signs, symptoms and potential exposures, along with multiple laboratory tests, to diagnose Lyme disease.104 The laboratory tests include serology, Western blot analysis of B. burgdorferi-associated bands, PCR analysis of blood and the nonspecific decrease in CD57+ natural killer cells. Unfortunately, as with other intracellular bacteria, Borrelia spirochetes are not always released into the blood circulation or other body fluids, making the very sensitive PCR method less than reliable for diagnosing Borrelia infections from blood samples. Lebech and Hansen106 found that only 40% of cerebrospinal fluid samples from patients with Lyme neuroborreliosis were positive for B. burgdorferi by PCR.
Co-infections in Lyme disease are important but, in general, have not received the attention that B. burgdorferi attracts. Some of the Lyme disease co-infections on their own, such as M. fermentans, have been shown to produce signs and symptoms comparable to those of B. burgdorferi infections.80, 102
The most common co-infections found in Lyme disease are species of Mycoplasma, mostly M. fermentans, present in a majority of cases.80, 103, 107 In some cases multiple mycoplasmal infections are present in patients with Lyme disease,80 while other common co-infections include Ehrlichia, Bartonella and Babesia species. Such co-infections are present in 10-40% of cases.103, 104, 108-112 Ehrlichia and Bartonella species are usually found along with Mycoplasma species in Lyme disease.94, 98, 108-111 Bartonella species, such as B. henselae,111 which also causes cat-scratch disease,113 are often found in neurological cases of Lyme disease.100, 111
Protozoan co-infections, such as intracellular Babesia species, have also been found with B. burgdorferi.100, 108, 109, 112, 114 The combination of Borrelia, Mycoplasma and Babesia infections can be lethal in some patients, and ~7% of patients can develop disseminated intravascular coagulation, acute respiratory distress syndrome and heart failure.109
Brucellosis
Brucellosis is a nonspecific clinical condition characterised by infection with intracellular Brucella species.115 Approximately 40% of patients with Brucella spp. infections have a systemic, multi-organ chronic form of brucellosis that is similar to CFS/ME in its multi-organ signs and symptoms.115, 116 Brucella infections can invade the central nervous system and cause neurological symptoms.117
Brucella species cause infections in animals, and humans often acquire the infections through prolonged contact with infected animals; these bacteria are thus zoonotic, that is, capable of being transmitted from animals to humans. Although at least eight Brucella species are pathogenic, only B. melitensis, B. abortus, B. suis and B. canis have been reported to cause disease in humans.116
When CFS/ME patients were examined for the presence of Brucella spp. infections, approximately 10% showed evidence of Brucella spp. infection by PCR (Odds Ratio=8.2, CI 95% 1-66, p<0.01).118 Interestingly, Brucella infections were less prevalent among urban CFS/ME patients than among rural CFS/ME patients (Odds Ratio=5.5, CI 95% 3-23.5, p<0.02), while control subjects had very low (1.4%) rates of infection. Co-infections with Mycoplasma species were also found in Brucella-positive CFS/ME patients.118
Final comments to part 2
The progression, and in some cases the inception, of many chronic diseases is probably elicited by various bacterial and viral infections.1, 39, 40, 119 Even if infections are not directly involved in the pathogenesis of these diseases, patients with chronic conditions are at risk of a variety of opportunistic infections that could result in co-morbid conditions or promote disease progression. Infections can complicate diagnosis and treatment, and patients with late-stage disease and complex neurological manifestations, such as meningitis, encephalitis, peripheral neuropathy or psychiatric conditions, or with other signs and symptoms, could have infections that go unrecognised or untreated.
Patients with chronic diseases are particularly difficult to treat using single-modality approaches, and this is especially true for patients who also have multiple chronic infections.103, 109 The multi-focal nature of chronic diseases, and the fact that treatments are often given to suppress signs and symptoms rather than to treat the causes of the disease or its progression, have resulted in incomplete or ineffective treatments. On the other hand, even when the causes of chronic diseases are known, by the time therapeutic intervention is undertaken it may be too late for approaches that would be effective were chronic infections not present. Moreover, if complex chronic infections are ignored or left untreated, recovery may be difficult, if not impossible, to achieve.
At the moment, the evidence that particular or specific types of infections are responsible for the inception or pathogenesis of chronic diseases is inconclusive.119 One problem that arises in trying to prove this hypothesis is that not all patients appear to have similar chronic infections, and some individuals can harbour chronic infections without any observable signs or symptoms. Although the incidence in symptom-free individuals of the types of chronic infections discussed in this review is generally very low, usually only a few percent,74-76, 120 this does not prove that such infections are important in pathogenesis. Because some patients with chronic diseases do not have easily diagnosed chronic infections, most researchers have concluded that infections are not involved in the pathogenesis of chronic diseases. Unfortunately, the tools available to find chronic infections are not optimal, and many patients are likely to go undiagnosed with chronic infections for purely technical reasons.1, 119-121
In the history of medicine, animal models of disease have provided useful information that could not be obtained through clinical studies alone, and the field of chronic diseases could benefit from greater use of relevant animal models. To be useful, however, such models must mimic the pathogenesis of the corresponding human disease and show a similar response to therapy; only then are they relevant to the human condition. For example, the infection of non-human primates with neuropathogenic microorganisms, such as Mycoplasma fermentans, resulted in brain infections and fatal diseases with clinically typical neurological signs and symptoms.122 These primates also respond to therapies that have been used successfully to treat humans.93, 123 Thus this particular model may be useful if the animals can be reproducibly infected with specific microorganisms and later develop neurological signs and symptoms that closely mimic chronic human neurological diseases. Future efforts to determine the relationship between specific infections and the pathogenesis of various chronic diseases may well depend on the further development of relevant animal models.