Grapefruit (Citrus paradisi) is thought to have originated as a cross between the Jamaican sweet orange (Citrus sinensis) and the Indonesian pomelo (Citrus maxima). It was first bred in Barbados and brought to Florida in the 1820s. Subsequently, different mutant and hybrid varieties were developed. Although the white and pink varieties were popularly consumed, the ruby red variety has since become the most popular and commercially successful.
The taste combines the sweetness and tanginess of an orange with a characteristic citrus sourness. Grapefruit is a rich source of vitamin C and potassium. It has antioxidant properties due to the presence of lycopene1 and an ability to inhibit atherosclerosis due to the presence of pectin2,3. The seed extract is thought to have antimicrobial and antifungal properties.
With good publicity, marketing and coverage by health magazines, grapefruit juice has gained widespread use, and in much of Western Europe and America it is one of the common fruit juices consumed at breakfast. In the United Kingdom, in terms of fruit juice sales, it ranks second among citrus fruit juices and fourth overall.4
Pharmacokinetic effects
The interaction of grapefruit juice with medication was first reported by Bailey et al in 1991 after their accidental discovery of an up to fourfold increase in the blood levels of felodipine when taken with grapefruit juice5. Further studies have identified similar interactions with more than 85 drugs6. The half-life of the effects of grapefruit juice is estimated to be around 12 hours7, but these effects may last from 4 hours to 24 hours8. The effects are more pronounced with regular consumption of grapefruit juice prior to ingestion of the drug, and there can be a cumulative increase in drug concentrations with continued grapefruit juice intake7,9.
As little as 200-250 ml of grapefruit juice may be sufficient to induce these effects7,10. Some of the interactions involve medications that have a narrow therapeutic window and can therefore cause serious adverse effects such as torsade de pointes, rhabdomyolysis, myelotoxicity, respiratory depression, gastrointestinal bleeding, nephrotoxicity and sudden cardiac death.6 There is a lack of awareness among both doctors and patients about its effects and interactions with various medications.
Mechanism of action
These pharmacokinetic interactions with grapefruit juice are more marked in drugs with a high first-pass metabolism and an innately low oral bioavailability. The oral bioavailability of affected drugs is increased but their half-life usually remains unaltered11,12. Grapefruit juice is associated with inhibition of the cytochrome P450 enzyme system, particularly the CYP3A4 enzyme7. The CYP3A4 enzyme is present in both the liver and the intestinal mucosa. Once a susceptible drug is taken up by the mucosa, it may be metabolised by CYP3A4 or pumped back into the intestinal lumen by P-glycoprotein. The observed effects of grapefruit juice are thought to be mainly due to inhibition of intestinal CYP3A4 activity, which leads to decreased first-pass metabolism and hence increased bioavailability. This inhibitory action is fairly quick and may be due to rapid degradation of the enzyme or decreased production of the enzyme from its mRNA; the production of the mRNA itself from DNA is not thought to be affected13. Susceptibility varies between individuals depending upon their genetic expression of CYP3A4, the effects being more prominent in those with a high small-intestinal CYP3A4 content7,14.
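As a simple illustration of why bioavailability rises while half-life does not (this worked example is ours, using standard pharmacokinetic notation, and is not taken from the cited studies), oral bioavailability can be written as

F(oral) = f(abs) × F(gut) × F(hepatic)

where f(abs) is the fraction absorbed, F(gut) the fraction escaping gut-wall (intestinal CYP3A4) metabolism and F(hepatic) the fraction escaping hepatic first-pass extraction. If grapefruit juice raised F(gut) from, say, 0.3 to 0.6 with the other terms unchanged, oral bioavailability would double, yet systemic (largely hepatic) clearance would be little altered, so the elimination half-life would remain essentially the same.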
The effect of grapefruit juice on P-glycoprotein is unclear. Activation of P-glycoprotein pumps the drug back into the intestinal lumen, which should reduce bioavailability; conversely, inhibition of P-glycoprotein should increase bioavailability. Some studies suggest that inhibition of P-glycoprotein is the mechanism responsible for the increased bioavailability of certain drugs such as cyclosporine15,16.
The active ingredients responsible for the interactions of grapefruit juice with medication have not been clearly identified. The compounds exerting this action are thought to be either flavonoids such as naringin and naringenin17,18,19,20 or furanocoumarins such as bergamottin and its derivatives21,22,23,24, but there is no clear consensus.
Drug interactions
Table 1 below lists some of the commonly used drugs whose bioavailability is affected by grapefruit juice. Although the best known interactions are included in the table, there are many other drugs, such as carvedilol, estrogens, itraconazole, losartan and methylprednisolone, whose bioavailability is increased by grapefruit juice but whose adverse effects are not yet clear.
Table 1: Potential risk of drug interactions with grapefruit juice6,7,13,25,26,27
(Risk of interaction: +++ = very high, ++ = high, + = moderate)

Anaesthetic: Ketamine (+++); Alfentanil (++); Fentanyl (++)
Antiarrhythmic: Dronedarone (+++); Amiodarone (++); Quinidine (+)
Anti-Cancer: Dasatinib (++); Everolimus (++); Nilotinib (++); Pazopanib (++); Sunitinib (++); Vandetanib (++)
Antidepressants: Buspirone (++); Sertraline (+); Clomipramine (+)
Antiemetic: Domperidone (+++)
Antiepileptics: Carbamazepine (++)
Anti-HIV: Maraviroc (+++); Rilpivirine (++)
Anti-infective: Erythromycin (++); Quinine (++); Primaquine (++)
Antiplatelet: Clopidogrel (++)
Antipsychotics: Pimozide (++); Quetiapine (++); Ziprasidone (++)
Benzodiazepines: Midazolam (+); Diazepam (+); Triazolam (+)
Calcium-channel blockers: Felodipine (+); Nifedipine (+)
Immunosuppressants: Cyclosporin (++); Tacrolimus (++); Sirolimus (++)
Opioids: Oxycodone (++); Methadone (++)
Statins: Simvastatin (+++); Atorvastatin (++)
Urinary tract: Solifenacin (+); Fesoterodine (+); Darifenacin (+); Tamsulosin (+)
Implications for clinical practice
Clinicians should be aware of these potential interactions and educate their patients about them, keeping in mind the individual variations in susceptibility. This may be particularly important for medications that have a very narrow therapeutic window, an innately low oral bioavailability or a high first-pass metabolism, mainly via CYP3A4.
A patient may develop exaggerated beneficial effects or, equally, significant adverse effects if they start consuming grapefruit juice mid-treatment. Conversely, a drop in the efficacy of a drug is also possible if a patient who consumes grapefruit juice regularly stops it suddenly.
To achieve a steady concentration of the medication and avoid such potential effects, it may be best to advise patients to avoid consuming grapefruit juice altogether if there is a potential for interaction. The half-life of the effect of grapefruit juice appears to be around 12 hours and it is therefore advisable to discontinue grapefruit juice 72 hours prior to starting any drug with potential interactions.8,9
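As a rough illustration of this advice (our calculation, based on the half-life quoted above): a 72-hour washout corresponds to six 12-hour half-lives, leaving about (1/2)^6 ≈ 1.6% of the original inhibitory effect, so intestinal CYP3A4 activity will have essentially returned to baseline before the new drug is started.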
Because the inhibition of CYP3A4 is prolonged and may last up to 24 hours, it is not possible to avoid these interactions simply by separating the times of drug and grapefruit juice consumption.8,9
More research is needed to clarify the mechanism of action and to determine the active ingredients. Identification of the active ingredient could allow oral administration of drugs that, because of extensive CYP3A4-mediated first-pass metabolism, can currently only be given parenterally.
Although the increased bioavailability of specific drugs might allow patients, under medical supervision, to reduce the dose of their medication, it is perhaps too early to recommend grapefruit juice as an adjunctive or augmentation strategy.
Postgraduate medical education in the United Kingdom has seen numerous dramatic changes in the last decade, with the introduction of structured training programmes and changes in the assessment of skills driven by Modernising Medical Careers.1 Overall, these new developments emphasise a competency-based curriculum and assessments. Alongside, and contingent on, these wider changes in medical education, psychiatric trainees have faced major transformations in their membership (MRCPsych) examinations.
The MRCPsych examination was first introduced in 1972, a year after the Royal College of Psychiatrists was founded. There have been various modifications to its structure since its inception, but a radical change occurred in the last decade with the introduction of an OSCE in 2003 and of the CASC, a modified OSCE, in June 2008. The CASC is considered a high-stakes examination as it is now the only clinical, and the final, examination towards obtaining membership of the College. The MRCPsych qualification is considered an indicator of professional competence in the clinical practice of psychiatry and has the main aim of setting a standard that determines whether trainees are suitable to progress to higher specialist training.2 In his commentary on Wallace et al3, Professor Oyebode describes the aims, advantages and disadvantages of the various assessment methods used in the MRCPsych examination and concludes that the precise assessment of clinical competence is essential.4
Traditionally, assessment of clinical skills has involved a long case examination ever since it was introduced into the clinical graduating examination by Professor Sir George Paget at Cambridge, UK, in 1842. This format was adopted by most medical institutions worldwide and remained the clinical component of the MRCPsych examination until 2003. There are shortcomings to this assessment method, and the outcome can be influenced by several factors such as the varying difficulty of the cases, the co-operation of the real patient and examiner-related factors. The reliability of assessing clinical competency with a single long case is low, and a candidate would need to interview at least ten long cases to attain the reliability required for a high-stakes examination like the MRCPsych.5 A fair, reliable and valid examination is necessary to overcome these difficulties, and OSCEs proved to be one answer.
One important aspect of assessing the validity and acceptability of assessment methods is asking examiners and candidates about their experiences and views once the examination has been rolled out. As far as the authors are aware, there has been one previously published survey of CASC candidates' views on this method of examination, and this was based at a revision course. Whelan et al6 showed that approximately 70% of candidates did not agree with the statement "there is no longer a need to use real patients in post-graduate clinical psychiatry exams". In addition, only 50% of candidates preferred the CASC to the previous long case, and the other 50% remained undecided. This raises doubts about the acceptability of the CASC format and merits further exploration.
Method
We conducted a national on-line survey asking both candidates and examiners about their views on the CASC examination.
Questionnaire development
Two questionnaires (one each for examiners and candidates) based on previously available evidence on this exam format6,7,8 were developed following discussions among the authors.
The final version of the questionnaire for both groups had the same seven questions with a five-point Likert scale. It included questions on whether the exam effectively assessed the competencies needed for real-life practice, whether there was over-testing of communication skills, whether feedback was adequate, respondents' views on the validity and reliability of the method and, finally, whether the clinical examination should revert to the previous style of long case and viva.
Sampling procedure
Examiners and candidates who had already sat the CASC examination were invited to complete the online survey. Links to the questionnaires were distributed via the Schools of Psychiatry in thirteen deaneries in the United Kingdom (including Wales, Northern Ireland and Scotland). We approached 400 candidates and 100 examiners from different deaneries, ensuring a wide geographical distribution. The sample size was chosen on the basis that around 500 candidates sit the CASC examination at each sitting and there are approximately 431 examiners on the CASC board (personal contact with the College). Participants were assured that their responses were confidential. The survey was open from mid-March to mid-April 2011. Reminders were sent halfway through the survey period.
Results
A total of 110 candidates and 22 examiners completed the survey. The response rate was higher for candidates (27.5%) than for examiners (22%). Despite the low response rate, the responses showed a good geographical spread, with replies received from most of the deaneries (87%). The London, East Midlands and West Midlands deaneries showed the highest response rates (14% each) while Scotland, Severn and North Western deaneries showed the lowest (2% each).
Among the 110 candidates, 52% were male and 48% female; among the examiners, 73% were male and 27% female. 55% of the examiners had been involved in the previous Part 2 clinical examination, whereas only 7% of the candidates had experience of it. The results are summarised in Tables 1 and 2.
Table 1. Candidates' views (n = 110)

CASC examines the required competencies to progress to higher training: strongly agree 10%, agree 38%, neutral 7%, disagree 26%, strongly disagree 19%
CASC examines all skills and competencies compared to previous Part 2 clinical exam: strongly agree 4%, agree 11%, neutral 46%, disagree 21%, strongly disagree 18%
CASC scenarios reflect the real-life situations faced in clinical practice: strongly agree 12%, agree 36%, neutral 13%, disagree 22%, strongly disagree 17%
CASC gives more emphasis on testing communication and interviewing skills than overall competencies: strongly agree 29%, agree 31%, neutral 14%, disagree 19%, strongly disagree 7%
CASC is more valid and reliable as a clinical exam: strongly agree 9%, agree 19%, neutral 29%, disagree 20%, strongly disagree 23%
Feedback system 'areas of concern' is helpful to unsuccessful candidates: strongly agree 1%, agree 11%, neutral 28%, disagree 26%, strongly disagree 34%
CASC needs to be replaced by the traditional style of exam (a long case and a viva): strongly agree 14%, agree 22%, neutral 25%, disagree 24%, strongly disagree 15%
Table 2. Examiners' views (n = 22)

CASC examines the required competencies to progress to higher training: strongly agree 14%, agree 45%, neutral 14%, disagree 18%, strongly disagree 9%
CASC examines all skills and competencies compared to previous Part 2 clinical exam: strongly agree 4%, agree 14%, neutral 23%, disagree 45%, strongly disagree 14%
CASC scenarios reflect the real-life situations faced in clinical practice: strongly agree 14%, agree 63%, neutral 5%, disagree 9%, strongly disagree 9%
CASC gives more emphasis on testing communication and interviewing skills than overall competencies: strongly agree 22%, agree 26%, neutral 17%, disagree 22%, strongly disagree 13%
CASC is more valid and reliable as a clinical exam: strongly agree 9%, agree 37%, neutral 27%, disagree 9%, strongly disagree 18%
Feedback system 'areas of concern' is helpful to unsuccessful candidates: strongly agree 0%, agree 36%, neutral 14%, disagree 27%, strongly disagree 23%
CASC needs to be replaced by the traditional style of exam (a long case and a viva): strongly agree 18%, agree 14%, neutral 41%, disagree 9%, strongly disagree 18%
Clinical competencies and skills
59% of the examiners and 48% of the candidates agreed that the CASC examines the required competencies to progress to higher training. Strikingly, only 18% of the examiners and 15% of the candidates agreed that the CASC allows assessment of all the skills and competencies necessary for higher trainees in comparison to the previous Part 2 clinical exam.
Content of the CASC
The majority of examiners (77%) and nearly half of the candidates (48%) agreed that CASC scenarios reflect real-life situations faced by clinicians in normal practice. However, 60% of the candidates and 48% of the examiners felt that the CASC excessively emphasises communication and interviewing skills.
Feedback - "areas of concern"
More than half of the candidates (60%) and half of the examiners (50%) felt that the feedback indicating "areas of concern" given to failed candidates was not helpful in improving their preparation for the next attempt.
Validity and reliability of the CASC as a clinical exam
Just over a quarter of the candidates (28%) and just under half of the examiners (46%) considered the CASC a valid and reliable method of clinical examination. However, only 36% of the candidates and 32% of the examiners supported replacing the CASC with a traditional clinical exam (a long case and a viva). Broadly comparable numbers (39% of the candidates and 27% of the examiners) disagreed with the statement that the CASC should be replaced by the previous examination style.
Discussion
To our knowledge this is the first study of candidate and examiner views since the introduction of the CASC. Its predecessor, the OSCE, has good reliability and validity in assessing medical students8 and has become a standard assessment method in undergraduate examinations. Whilst OSCEs have been held to be reliable and valid in a number of assessment scenarios,8 there have been doubts about their ability to assess advanced psychiatric skills,9 which was one of the main reasons for retaining the long case in the MRCPsych Part 2 clinical exam.2 Over the years, most of the Royal Colleges have introduced OSCEs into their membership examinations and used simulated patients in some scenarios. However, the CASC is the first examination to use only simulated patients, in a combination of paired and unpaired stations. So far there has been no published literature evaluating this method systematically.
In a recent debate paper10 it has been argued that the CASC may have significant problems related to its authenticity, validity and acceptability. The findings of our survey reflect similar doubts about the reliability and validity of the CASC exam amongst both candidates and examiners. The content validity of the CASC has been demonstrated by the College blueprint11 and its face validity appears to be good. However, as far as we are aware, concurrent and predictive validity data have not been published. Although the global marking system appears to have better concurrent validity than checklists, it gives examiners flexibility in making judgements similar to that of the long case, which may affect the transparency and fairness of the CASC. This may indicate that this new and promising examination method requires further systematic evaluation and modification before its users fully accept it.
According to the results of our study, the content of the CASC exam satisfies its purpose of assessing candidates' competencies to progress to higher professional training. However, many respondents felt that it lacked the completeness of the previous traditional clinical examination, which assessed skills in an integrated way. Although there were some differences in how the candidates and the examiners perceived the CASC exam, most respondents agreed that the CASC places more emphasis on communication and interviewing skills than on an overall assessment of the candidate's competency.
Harden et al,12 in their paper on OSCEs, criticised the compartmentalisation of knowledge and the discouragement of broader thinking during clinical examinations. They also suggested using a long case and/or workplace-based assessments rather than relying on OSCEs alone to assess trainees. Benning & Broadhurst13 expressed similar concerns about the loss of the long case from the MRCPsych examination. Our findings support the argument that the CASC assesses competencies in a piecemeal fashion rather than reflecting the demands on senior doctors in real practice, which often involve deciding what is and is not important depending on context.
The OSLER14 (Objective Structured Long Examination Record) method might overcome these shortcomings and improve the objectivity and transparency of the long case. In this method, two examiners assess the candidate and grade their skills individually on a ten-item objective record. They then decide together on the appropriate grade for each item and agree an overall grade. The ten items include four on history, three on examination and another three covering investigations, management and clinical acumen. The OSLER method is also practical, as no extra assessment time is required and it can be used for both norm-referenced and criterion-referenced exams. Case difficulty can be determined by the examiners and all candidates are assessed on identical items. This method therefore assesses the candidate's overall clinical competency and reduces the subjectivity associated with the long case.
Another alternative might be to use a combination of assessment methods, as suggested by Harden.12 An 8-10 station OSCE could be combined with a long case assessed using the OSLER method. The OSCE stations might include patient management scenarios along with interview and communication skills scenarios. The final score determining the result could also include marks from workplace-based assessments, as these provide a clear indication of the candidate's skills and competence in real-life situations.
It is also evident from our findings that both candidates and examiners are largely dissatisfied with the extent and usefulness of the feedback provided to unsuccessful candidates. The feedback system has been criticised for its inability to clarify the specific areas or skills which unsuccessful candidates need to improve. The recent "MRCPsych Cumulative Results Report"15 states that the pass rate of candidates declines after the first attempt. Perhaps this could be improved if failed candidates received more detailed feedback about their performance.
There are a number of limitations to this study. The response rate was low, but it was broadly in the range of other online surveys16 and there was representation from most of the deaneries in the United Kingdom. There could be a number of reasons for the low response rate. As far as we are aware, a few deaneries were not willing to distribute the questionnaire through their School of Psychiatry, and we had to contact individual trusts in those areas to distribute the survey. The poor response rate from the examiners could reflect low interest in participating and lack of time. Older examiners and those with more experience of the CASC may also have held particular views which influenced the responses; however, when this was examined further, there were no major differences between respondents who had experience of the previous Part 2 examination and those who did not. In addition, one of the survey questions consisted of two parts (views on validity and reliability), which could have been difficult to answer accurately.
The findings of this preliminary study raise some doubts about the acceptability of the CASC to both candidates and examiners. There may be an element of subjective bias in the respondents' views, perhaps influenced by other ongoing and controversial changes in the NHS, including the roles of the GMC and the College in postgraduate medical education. On the other hand, it may be a signal that it is worthwhile to reconsider the implications of the CASC for education and training and to evaluate this assessment method further in a systematic way.
Hypertension is rife in developed countries and could arguably be labelled the most common chronic disease in the UK1. It is estimated that a quarter of all adults in the UK have hypertension2. This is alarming in light of the considerable contribution of hypertension to mortality and morbidity. Evidence shows that for each 2mmHg rise in systolic blood pressure, mortality from ischaemic cardiac events rises by 7% and mortality from ischaemic intracranial events rises by 10%3-4. The key to reversing this lies in diagnosing hypertension accurately and quickly and knowing how best to treat each patient.
Understanding hypertension
It is still not entirely clear what mechanisms cause hypertension. Evidence shows that systolic blood pressure increases in a linear fashion with age, owing to the loss of elastic tissue in the arteries1,5. The majority of patients who are diagnosed with hypertension have 'essential hypertension', in other words, no clear cause is found besides increasing age1-2. Research has shown that other factors can be associated with hypertension but alone may not necessarily cause it. Unalterable risk factors include genetic predisposition, age, sex and race. Other factors shown to raise blood pressure include environmental factors such as lifestyle and diet, obesity (randomised controlled trials have shown that each kilogram of weight lost is associated with roughly a one mmHg fall in diastolic blood pressure5), excessive alcohol intake, smoking, stress at work or in the home, socio-economic status and recent major life events1-2,6.
How to take the perfect blood pressure measurement
In the clinic setting, make sure the patient is relaxed: sit the patient comfortably with their arm outstretched and resting on the table, and wait a few minutes. Make sure the sleeves are not too tight, as this will alter the readings1,5. Check that the blood pressure cuff is the correct size for the patient, as small cuffs can give a falsely high reading in larger patients and large cuffs a falsely low reading in smaller patients3,5. Check the pulse is regular, as irregular pulses can give incorrect readings from automated devices; if in doubt, measure the blood pressure manually5. Take the blood pressure in both arms and repeat several times; discard the first reading and always record the lowest reading in the patient's file1.
The well recognised 'white coat syndrome' has a prevalence of about 10% in the UK and, according to NICE data, can cause a difference of 20/10mmHg between readings in a clinical setting and those at home1,3,5. In light of this, NICE altered its guidelines in 2011, stating that any patient with a reading close to 140/90mmHg is to be sent home with an Ambulatory Blood Pressure Monitor (ABPM)3. This device is attached to the patient for a minimum of 24 hours and records the patient's blood pressure every 30 minutes during waking hours. The idea is to rule out any 'white coat syndrome' and to obtain a range of readings as the patient goes about their usual day-to-day activities. When the readings are analysed (the clinician averages at least 14 readings), the diagnosis can be confirmed and treatment initiated1,3.
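The sketch below is a purely illustrative Python summary of this averaging step (the readings are hypothetical and it is not part of NICE guidance or any ABPM device software); it simply averages daytime readings and compares the result with the 135/85mmHg ABPM threshold quoted later in this article.

# Illustrative only: average hypothetical waking-hour ABPM readings and
# compare the mean with the 135/85 mmHg ABPM threshold.
readings = [  # (systolic, diastolic) in mmHg, one reading every 30 minutes
    (142, 88), (138, 86), (135, 84), (140, 90), (137, 85),
    (133, 82), (139, 87), (136, 84), (141, 89), (138, 86),
    (134, 83), (137, 85), (140, 88), (136, 84),
]

if len(readings) < 14:
    raise ValueError("NICE advises averaging at least 14 waking-hour readings")

mean_sys = sum(s for s, _ in readings) / len(readings)
mean_dia = sum(d for _, d in readings) / len(readings)

hypertensive = mean_sys >= 135 or mean_dia >= 85
print(f"ABPM average {mean_sys:.0f}/{mean_dia:.0f} mmHg - "
      f"{'meets' if hypertensive else 'does not meet'} the 135/85 mmHg threshold")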
The value of repeated measurements in different settings has been shown in evidence from as early as the 1970s2,5. Research has also shown that patients usually have a high blood pressure reading initially which drops after subsequent measurements, hence the new guidelines are in place to allow for a range of readings before diagnosing and treating hypertension1,3.
Investigations
While the patient is still in the clinic, assess the patient’s overall cardiovascular risk score using the cardiovascular risk assessment tool3. Perform a thorough physical examination, including looking for evidence of target organ damage, for example, left ventricular hypertrophy, renal disease, peripheral vascular disease and changes in the retina from raised blood pressure1,3,5. If there is suspicion of hypertension, send the patient home on an ABPM3. While waiting for the results of the ABPM, any patient under investigation for hypertension needs to have a baseline set of tests1,7-8. This includes a full blood count, renal function tests, liver function tests, a fasting glucose and cholesterol blood test, an ECG (electrocardiogram) and a urine dip. These tests are a basic screen for assessment of target organ damage6,9. If these investigations are not adequate, a patient can be referred for more extensive investigation for target organ damage, for example, an echocardiogram or a renal ultrasound/angiography1,3,5.
Any patient who is young or who presents with persistent hypertension, especially hypertension that does not respond to treatment, needs further investigation for other causes, such as renal disease, adrenal disease, alcoholism or steroid use (not forgetting that the oral contraceptive pill can also cause hypertension)1,7,9.
In general, a patient should be treated if their blood pressure readings are persistently 140/90mmHg or higher. For those that have borderline readings, for example, 135/85mmHg, clinicians must assess their cardiovascular risk score and look for target organ damage. If there is evidence for either of these, a patient should be started on treatment immediately1.
Treatment
Non-pharmacological
First-line treatment of hypertension is always non-pharmacological, also known as 'lifestyle changes'1,3,5,9. Attempt to find out the details of the patient's diet, weight, employment, stress levels at work and home, exercise, alcohol intake and smoking habits. Once established, assist the patient in altering their lifestyle choices in order to lower their blood pressure. Patients often feel overwhelmed and many benefit from group activities, such as smoking cessation and weight loss groups3. Other options include a dietician referral, counsellors if they are struggling with motivation and low mood, and gym sessions or personal trainers. Encourage the patient: if they succeed in altering their lifestyle and thereby bringing down their blood pressure, they may avoid prescription medication.
Lifestyle changes can delay hypertension for many years, but if the blood pressure continues to creep upwards over multiple subsequent visits and lifestyle options have been exhausted, it is then appropriate to start pharmacological management1,3,5.
Pharmacological Treatment
In general terms, always start with monotherapy and increase the dose according to patient response. According to the 2011 NICE guidelines, if a patient is over 55 years of age and/or of Afro-Caribbean origin, start with a calcium channel blocker, such as Amlodipine3. If these are contra-indicated, start with a thiazide diuretic3. With regard to thiazide diuretics, the new NICE guidelines state that Chlortalidone (12.5-25.0 mg once daily) or Indapamide (1.5 mg modified-release or 2.5 mg once daily) should be used in preference to the agents clinicians have been prescribing for years, namely Bendroflumethiazide and Hydrochlorothiazide3. For those already on these conventional thiazide diuretics, NICE states that if the patient's blood pressure is stable, Bendroflumethiazide or Hydrochlorothiazide may be continued3.
Patients diagnosed with hypertension who are under 55 years of age should be started on an ACE inhibitor (Angiotensin Converting Enzyme inhibitor), for example Ramipril3; if this is not tolerated, replace it with an ARB (Angiotensin II Receptor Blocker) such as Losartan3.
Review the patient every few weeks initially and extend the reviews to 6 months once the blood pressure is within the therapeutic range6. Do not forget to check the patient's renal function in the first few months of starting a new drug, and always be aware that if a patient's blood pressure drops drastically after starting an ACE inhibitor, this suggests underlying renal disease and must be investigated1,3,5,9.
Continue to titrate the dose of the drug until the patient's blood pressure is satisfactory. Consider adding a second agent when the patient is nearing the maximum dose of the first agent and the blood pressure is rising again3,5. Depending on what the patient is on, add either an ACE inhibitor, an ARB or a calcium channel blocker; for example, if the patient is on Ramipril, add Amlodipine, and vice versa3. If a calcium channel blocker is not tolerated as second line, consider using a thiazide diuretic. For Afro-Caribbean patients who are already on calcium channel blockers, add an ARB rather than an ACE inhibitor3.
Following that, if the blood pressure is still not within the therapeutic range, consider adding a third agent, or alternatively discontinue the first agent, continue with the second and add a third from among an ACE inhibitor, ARB or calcium channel blocker. Consider a thiazide diuretic if patients are intolerant of any of the above3.
According to NICE, beta-blockers should not be considered in treating hypertension unless the patients are very young or intolerant of ACE inhibitors, ARBs or calcium channel blockers3.
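The following Python sketch condenses the step 1 drug choice described above into a single function. It is a much-simplified illustration of the 2011 NICE guidance as summarised in this article, not a clinical decision tool, and the function name and example drugs are ours.

# Much-simplified sketch of the step 1 choice described above (illustrative only).
def first_line_agent(age, afro_caribbean, ccb_contraindicated=False):
    """Suggest a first-line antihypertensive class for step 1."""
    if age > 55 or afro_caribbean:
        if ccb_contraindicated:
            return "thiazide diuretic (e.g. indapamide or chlortalidone)"
        return "calcium channel blocker (e.g. amlodipine)"
    # under 55 and not of Afro-Caribbean origin
    return "ACE inhibitor (e.g. ramipril); ARB (e.g. losartan) if not tolerated"

print(first_line_agent(age=62, afro_caribbean=False))  # calcium channel blocker
print(first_line_agent(age=48, afro_caribbean=False))  # ACE inhibitor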
Beware of, and monitor closely, any elderly patients who are on antihypertensives, as the physiology of ageing affects drug handling: for example, decreased clearance of drugs by the kidney or liver, decreased baroreceptor sensitivity (postural hypotension), chronic sodium retention and reduced cardiac reserve. Do not forget communication and compliance issues in the elderly1,7.
Make sure there is an annual review for each patient diagnosed with hypertension, to obtain blood pressure readings, review medication and assess how the patient is coping with lifestyle changes and any side effects of the antihypertensives.
When to Refer
Resistant hypertension is defined as a patient remaining hypertensive despite being on triple or quadruple drug therapy3,7. Consider starting a low dose of Spironolactone (if the serum potassium is less than 4.5mmol/l) and refer to a specialist for advice3-4.
If subsequent readings are 180/110mmHg or more, start antihypertensives immediately and refer the patient to hospital. Also refer immediately if retinal haemorrhages or papilloedema are seen1,3.
Summary
If a patient is suspected to have hypertension, send them home with an ABPM and perform baseline tests1,3.
Start treatment if the clinic blood pressure is 140/90mmHg or higher and the ABPM average is 135/85mmHg or higher, and/or the patient has one of the following:
target organ damage
established cardiovascular disease
renal disease
diabetes
10-year cardiovascular risk equivalent to 20% or greater (NICE guidelines, 2011)3
Start on monotherapy and review every few months, until blood pressure is stable3.
Review yearly after stability has been reached and consider adding in further antihypertensives if the blood pressure rises again.
Book patients in for annual reviews of end organ damage, as this provides an excellent overview of disease progression.
Hypertension is to be respected in light of its considerable contribution to morbidity and mortality. Never underestimate the importance of keeping a patient's blood pressure within the desired range4.
An unquestioning belief in the power and efficacy of nature's healing remedies and processes, the placebo effect, disappointment and dissatisfaction with conventional medicines, outright rejection of orthodox treatments, convincing and persuasive advertising, reinforcement from others with similar views, endorsement by influential celebrities, perceived hand-me-down wisdom, bogus pseudoscientific claims, uncritical journalism, scare-mongering, feelings of desperation for a 'cure', and anecdotal case studies or surveys masquerading as research are among the many reasons why patients and the public choose to use alternative medicines, whether bought from local stores, pharmacies or the internet. Over-the-counter drugs (OTCs) and over-the-internet remedies are taken, with or without conventional medicines, by millions of people every year and, while most are harmless and safe to use, there are inherent dangers of additive effects and interference with prescribed medications.
Benefits for patients who use OTCs include convenience and a sometimes lower cost than prescription drugs (analgesics, for example). Preparations vary in price according to the pharmaceutical provider. Self-treatment of minor ailments should, in theory, lead to less pressure on GPs. Unfortunately, some patients tend to self-medicate for long periods (with analgesics, for example) without visiting their GP for a health check to monitor the condition(s) for which they are using the OTC remedy in the first place. There is also the incorrect but widespread belief that because a prescription is not needed to obtain these drugs, they must be much less harmful than prescription-only preparations. Medicines may be used inappropriately, such as paracetamol for insomnia or aspirin for stomach aches. Very often no record of OTCs is documented in the patient's notes. The list of OTCs is too numerous to cover in any detail here, and for practical reasons the authors will concentrate on common legal products with which most people are familiar.
What causes the adverse effects?
An understanding of drug interactions gained momentum through the study of metabolizing enzymes. Cytochrome P450 inhibition or induction is probably the main mechanism for the pharmacokinetic interactions of drugs. CYP450 enzymes are haemoproteins (like haemoglobin) comprising many related though distinct enzymes referred to as CYP. Over 70 CYP gene families have been described so far; these are further divided into subfamilies, of which CYP1, CYP2 and CYP3 are involved in hepatic drug metabolism.1 Thus, CYP3A denotes a cytochrome P450 enzyme that is a member of family 3 and subfamily A. It is abundant in the liver and intestine. In the liver, CYP450s are found mainly in the smooth endoplasmic reticulum. Inhibition of CYP enzymes may result in enhanced plasma and tissue concentrations of drugs, leading to toxicity. Likewise, induction may result in reduced drug concentrations, leading to decreased drug efficacy and treatment failure. Tricyclic antidepressants are substrates of 2D6 (CYP450 2D6 in full), which inactivates them by hydroxylation. For example, if a tricyclic antidepressant is given concomitantly with the serotonin/noradrenaline reuptake inhibitor venlafaxine, the levels of the tricyclic antidepressant will rise because venlafaxine inhibits CYP450 2D6 and therefore prevents the breakdown of the tricyclic compound. Similar effects can occur with paroxetine, duloxetine, fluoxetine and atomoxetine. However, in clinical practice only atomoxetine requires dosage reduction when given with a 2D6 inhibitor.2
Diphenhydramine (a common ingredient in sleeping tablets) in therapeutic doses inhibits CYP450 2D6-mediated metabolism of venlafaxine in humans. Venlafaxine itself has a low potential to inhibit the metabolism of substrates for CYP2D6, such as imipramine and desipramine, compared with several of the most widely used SSRIs, as well as the metabolism of substrates for several of the other major human hepatic P450s.3 Of all marketed drugs, about 60% are metabolized by the CYP450 system. The presence of CYP450 enzymes in enterocytes and hepatocytes contributes to the first-pass metabolism of drugs. This will have add-on effects when CYP450 inhibitors are simultaneously ingested. For example, grapefruit juice inhibits this enzyme system, reducing the first-pass (presystemic) metabolism of susceptible drugs taken by mouth and therefore increasing their bioavailability.4
Common varieties of OTCs
Many commonly used OTC preparations (other than food supplements and analgesics) contain the ingredient dextromethorphan (related to codeine), used to treat coughs, colds and flu symptoms. Up to 125 different types of cold medicine contain dextromethorphan. It is an effective cough suppressant (antitussive) that works by raising the coughing threshold; it is not an analgesic. Cough syrups and tablet or capsule forms of medicine that contain dextromethorphan may lead to loss of coordination, dizziness and nausea when used in high doses. Dextromethorphan is the d-isomer of the codeine analogue of levorphanol, which mimics morphine. It is relatively nontoxic and its antitussive effects last for about 6 hours. It should be avoided when an MAO inhibitor is being given concomitantly.
The generic term antihistamine refers in general to the H1 receptor antagonists used for inflammatory and allergic conditions. Sedation is a prominent feature of the H1 antagonist diphenhydramine, used for allergies such as hay fever (short-term beneficial effect) and for symptomatic relief of the common cold. Guaifenesin, derived from the guaiac tree, is a common ingredient in cough expectorants. It is usually harmless, though it may cause problems in patients with compromised renal function. The mechanism of action, if any, is not known, save that it 'reduces the viscosity' of respiratory secretions.
OTCs believed to help weight loss, such as laxatives, diuretics and diet pills, are often purchased either for genuine health concerns or for misuse. All have serious and potentially fatal side effects if taken for a long time, particularly electrolyte disturbances.5 Where diet pills are concerned, problems may emerge insidiously with a few pills, quickly escalating to addiction. The alkaloid ephedrine is the principal active ingredient in the herb ephedra or ma huang. It is a potentially dangerous stimulant (sympathomimetic amine) contained in diet pills. Among the many possible side effects of diet pills are, of course, excessive weight loss with its attendant problems, alopecia, insomnia and anxiety.
Used daily by millions of people worldwide, coffee and tea contain the methylxanthines caffeine and theophylline, which act mainly by antagonism at purine receptors and by inhibiting phosphodiesterase. The effect is akin to a beta-adrenoceptor agonist action. Caffeine is found naturally in the leaves, beans and fruits of over 60 plants worldwide; its bitterness acts as a deterrent to pests. It can also be produced synthetically. Other than coffee and tea, the most common dietary sources are cocoa beans, cola and energy drinks. Product labels are required to list caffeine in the ingredients. Caffeine consumption in excess of 250mg daily produces symptoms indistinguishable from anxiety, including nervousness, irritability, tremulousness, muscle twitching, sleep disturbance, tachycardia, tachypnoea, palpitations, ectopic beats and diuresis. A withdrawal syndrome can also occur and is associated with headache and a general muzziness. Caffeine may interfere with the effectiveness of drug treatment; for example, clozapine plasma levels can be raised, presumably through competitive inhibition of CYP1A2.6 In general terms, an average cup of brewed coffee contains 100mg of caffeine, a 250ml can of Red Bull 80mg, a cup of tea 45mg, a cup of instant coffee 60mg and a cup of filter coffee 120mg. Excess consumption of Red Bull may cause myopathy due to caffeine-mediated hypokalaemia and rhabdomyolysis.7
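The short Python tally below, using the approximate per-serving figures quoted above with a hypothetical day's intake, simply illustrates how easily the ~250mg daily threshold can be exceeded; it is ours and not drawn from any cited study.

# Illustrative only: approximate caffeine content per serving, as quoted above.
caffeine_mg = {
    "filter coffee (cup)": 120,
    "brewed coffee (cup)": 100,
    "instant coffee (cup)": 60,
    "tea (cup)": 45,
    "red bull (250 ml can)": 80,
}

# A plausible (hypothetical) day: two filter coffees, one tea, one can of Red Bull.
day = ["filter coffee (cup)", "filter coffee (cup)", "tea (cup)", "red bull (250 ml can)"]
total = sum(caffeine_mg[item] for item in day)
print(f"Total caffeine: {total} mg (symptom threshold ~250 mg/day)")  # 365 mg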
Paracetamol (acetaminophen in the USA) is metabolized in the liver. It is probably the most common household analgesic, is present in a variety of preparations and is usually well tolerated. Drugs that induce the liver enzymes which metabolize it, for example carbamazepine, isoniazid and rifampicin, reduce the levels of paracetamol and decrease its action. Doses greater than recommended may result in liver damage, and in overdose a potentially fatal hepatic necrosis can occur.
Not much is known about the contents of home medication cabinets (HMCs), the management of leftover medications, and the inclination of patients toward self-initiated treatment with non-prescription drugs. One cross-sectional study conducted in 72 Belgian community pharmacies revealed that the most frequently encountered categories of registered medicines were NSAIDs, nasal decongestants and drugs used for nausea. Despite their high prevalence, NSAIDs and non-opioid analgesics did not predominate (14%) among the most frequently used drugs: food supplements were used daily in 23.3% of households. Twenty-one per cent of the drugs were expired, 9% were not stored in the original container, and the package insert was missing for 18%. Self-medication, although generally acceptable in terms of indication and dosage, was commonly practised, including with prescription drugs. Given that younger people showed a significantly higher rate of self-medication, awareness of the risks of self-medication is warranted.8
Relevance to Psychiatrists
Many psychiatric conditions are associated with excess alcohol use, which complicates the picture when OTCs are used concurrently. Mixing alcohol with medication has the potential to cause nausea and vomiting, headaches, drowsiness, fainting and loss of coordination. Because so many drugs can be bought without a prescription, potential interactions with alcohol are often forgotten. Teenagers see OTCs as safer than illegal drugs, and OTCs are sometimes taken to get a buzz or to help stay awake while studying; the home medicine cabinet allows quick access. Besides, parents will most likely have given an OTC preparation to their children for colds or other minor everyday ailments. Most drug education programmes, however, focus primarily on illegal drugs, not OTC drugs and their potential for abuse.
Of some interest and importance to psychiatrists are the following interactions: bleeding when warfarin is combined with ginkgo (Ginkgo biloba); a mild serotonin syndrome in patients who mix St John's wort (Hypericum perforatum) with serotonin-reuptake inhibitors; decreased bioavailability of digoxin when combined with St John's wort; induction of mania in depressed patients who mix antidepressants and ginseng; exacerbation of extrapyramidal effects when neuroleptic drugs are combined with betel nut (Areca catechu); and an increased risk of hypertension when tricyclic antidepressants are combined with yohimbine. Disulfiram, which inhibits aldehyde dehydrogenase, also inhibits the metabolism of warfarin. Metronidazole causes an unpleasant disulfiram-like reaction when mixed with alcohol. Consumption of 6-8 glasses of grapefruit juice per day may raise levels of carbamazepine and pimozide. Grapefruit juice is thought to inhibit the metabolism of many drugs, and the inhibition can last a number of hours.9 The St John's wort component hyperforin contributes to the induction of CYP3A4. St John's wort also enhances the metabolism of other CYP3A4 substrates, including the antiretroviral drugs indinavir and nevirapine, oral contraceptives, and tricyclic antidepressants such as amitriptyline. Other herbal remedies with the potential to modulate cytochrome P450 activity include ginseng, garlic preparations and liquorice.10 Intake of St John's wort increases the expression of intestinal P-glycoprotein and the expression of CYP3A4 in the liver and intestine. The combined up-regulation of intestinal P-glycoprotein and hepatic and intestinal CYP3A4 impairs the absorption and stimulates the metabolism of cyclosporine, leading to subtherapeutic plasma levels.
The hormone melatonin plays a role in regulating the sleep-wake cycle but does not induce sleep per se. It is easily available over the internet and over the counter in the USA, and many people use it for jet lag. Melatonin has side effects including diarrhoea, abdominal pain, headaches, nightmares, morning hangover, nausea, mild depression and loss of libido. It is used for many other complaints including tinnitus, depression, chronic fatigue syndrome (CFS), fibromyalgia, migraine and other headaches. Valerian root, a medicinal herb, has been known to cause liver damage and should be used with caution. It too is most commonly used for insomnia and is frequently combined with hops, lemon balm or other herbs.
Many complementary medicines taken for anxiolysis or sedation (e.g. kava kava, valerian, passion flower and chamomile) are GABAergic, GABA (formed from glutamate) being the major inhibitory mediator in the brain, though for some, such as hops, the mechanism of action remains unknown. As expected, all of these remedies can lead to drowsiness when taken in high doses and can potentiate the effect of synthetic sedatives.11 Kava has been taken off the market because of its hepatotoxicity.
Although sufficient dietary fibre and water are effective for the treatment of constipation, some patients fear they are building up 'toxins' if they do not have 'regular' bowel habits. Constipation is often caused by opiate analgesics, which are widely available, and in many cases patients are using antidepressant or other psychotropic medication concurrently. The tendency to misuse laxatives is commonly seen in anorexia nervosa, though it is not confined to that disorder. The osmotic laxative lactulose is a disaccharide of galactose and fructose, and care is therefore needed with diabetic patients, particularly if they are taking neuroleptic medications such as clozapine or olanzapine. Abdominal cramps and diarrhoea can occur with high doses. Laxatives have the potential to interfere with potassium levels, usually causing hypokalaemia.5
Ordinary foods and drinks may interfere with prescribed medications.12 Grapefruit juice reduces the metabolism of calcium channel antagonists. Vegetables such as broccoli, cabbage and Brussels sprouts are putative cytochrome P450 inducers and are known sources of vitamin K. Red wine, ethanol and cigarette smoke are also believed to induce the cytochrome P450 system and have the potential to interfere with the metabolism and catabolism of many drugs. Smoking interferes with clozapine metabolism: when smokers are prescribed clozapine, abrupt smoking cessation may lead to high plasma concentrations with potentially serious consequences. Clozapine plasma concentrations can rise 1.5 times in the 2-4 weeks following smoking cessation,13 and in some instances by 50-70% within 2-4 days. Where baseline plasma concentrations are higher, particularly over 1 mg/litre, the plasma concentration may rise dramatically owing to non-linear kinetics. If patients smoking more than 7-12 cigarettes per day while taking clozapine decide to quit, the dose may need to be reduced by 50%.14 Smoking also lowers duloxetine levels owing to induction of CYP1A2 by hydrocarbons contained in tobacco smoke. Patients cannot be expected to be aware of these facts, let alone understand the pharmacology of the multitude of chemicals contained in OTCs.15
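As a purely illustrative calculation based on the figures above (the plasma level is hypothetical and roughly dose-proportional kinetics are assumed, which, as noted, does not hold at higher concentrations): a patient stable on clozapine with a plasma concentration of 0.40 mg/litre who stops smoking might see the level drift towards 0.40 × 1.5 = 0.60 mg/litre over the following weeks; halving the dose would then be expected to bring the concentration back towards roughly 0.30 mg/litre, which is why a dose reduction of around 50%, guided by plasma level monitoring, is suggested.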
Availability does not mean harmless
Most people using OTCs are unaware of the potential for harm. Herbal remedies, for instance, with their attractive packaging, convey the impression of being beneficial merely because they contain 'earth minerals' and other 'natural ingredients', and must therefore be good for health, rather like eating vegetables or taking vitamins.10 There are numerous instances of drug interactions, and many preparations may contain contaminants such as mercury, lead and arsenic. One of the commonest ingredients in many lotions and potions is hydrocortisone, which, if used liberally, may cause skin atrophy. The most worrying aspect of OTCs is that they give hope to people with serious conditions which might be better treated with conventional medicines: multivitamins for cancer, mineral supplements for constipation, and so forth. With buzz words such as 'healing, energy, vitality, harmony, body balance, healthy living, total well-being, holistic' and 'traditional' targeting the sometimes gullible consumer, OTCs become very appealing. Others are taken in by the pseudoscientific jargon: 'healing powers, purifying the blood, eliminating toxins from the bowel', boosting one's immune system, and so forth. The outcome can be serious: for example, Chinese herbal medicines containing extracts from Aristolochia plants have been implicated in the high incidence of urinary tract cancer in Taiwan, a study has suggested,16 because aristolochic acid has a consistent pattern of inducing DNA damage.
Some patients may be coincidentally taking conventional, proven medicines yet attribute their improved health to the alternative remedy. Other beneficial factors which are often conveniently ignored include a change in diet, increased wellbeing through physical exercise, or going on holiday. There is, of course, also the natural remission of illness, particularly with transient viral infections or unexplained lower back pain, to cite two instances.17
Some common problems
Although the dangers of the common analgesics are relatively well known (paracetamol causing liver damage, gastrointestinal upset with ibuprofen), patients are often unaware of the potential for adverse effects with other preparations. Nor are they always aware that many compounds combine two analgesics, for example paracetamol and aspirin, or paracetamol and ibuprofen. Nonsteroidal anti-inflammatory drugs (NSAIDs) interfere with renal clearance and may result in elevated lithium levels with resultant toxicity.18 Combined use of an antidepressant or sodium valproate with an OTC could lead to abnormal liver function tests attributed solely to the former agents and not the OTC. Even reading the label does not guarantee insight into and understanding of what is on offer. Labels are carefully and handsomely packaged by advertisers to persuade people their product is better than conventional medicines, and most consumers spend little time reading the labels on ordinary foodstuffs, never mind the chemical constituents of OTCs. In transplant patients, self-medication with St John's wort (Hypericum perforatum) may lead to a drop in plasma levels of the immunosuppressant drug cyclosporine, causing tissue rejection. In the US, the Food and Drug Administration (FDA) approved a regulation in 1999 requiring that all OTC drug labels contain certain information, such as ingredients, doses and warnings, in a standardized format. This covers thousands of non-prescription products, including sunscreens. In the same way that people understand the nutritional value of foods, it is hoped that these efforts will help people use OTCs safely.
Sexual side effects are a frequent accompaniment of psychotropic drugs, and patients are often bothered by impotence to such a degree that they resort to surfing the internet to acquire sildenafil (Viagra) and the like. Such over-the-internet medicines are easy to acquire. Carbamazepine and St John's wort will decrease the level of sildenafil by inducing CYP3A4. Ketoconazole, the antifungal agent, inhibits the same enzyme and may increase the levels of citalopram. Metronidazole has a disulfiram-like reaction with alcohol.
There is also the problem of addiction with OTCs because of the ease of access to opioid compounds; patients often do not perceive them as having addictive potential. Preparations containing ephedrine or dextromethorphan can be abused. Ephedrine is still used as a nasal decongestant. As an indirectly acting sympathomimetic amine it can react dangerously with monoamine oxidase inhibitors because of the increased amount of noradrenaline stored in noradrenergic neurones. Opioids may be crushed and the powder snorted or injected, leading to euphoria or elation, followed by addiction when compulsive use takes over. Patients may be subject to mood swings, making underlying psychiatric disorders and drug treatment difficult to manage. Opioids produce drowsiness and depress respiration in high doses; the combination with sedative psychotropic medication such as mirtazapine, olanzapine or quetiapine could be deleterious, especially where there is concomitant weight gain. Buspirone (a 5-HT1A receptor agonist used for anxiety) may interact with monoamine oxidase inhibitors (MAOIs) such as isocarboxazid, phenelzine and tranylcypromine; use of buspirone with these drugs can increase blood pressure. The combination of buspirone and trazodone may raise LFTs, and the combination of buspirone and warfarin may accentuate the effects of warfarin and increase the risk of bleeding. Patients taking buspirone should not drink grapefruit juice, since even some time after a dose is taken the amount of buspirone in the blood may be increased. Carbamazepine increases the metabolism of the oral contraceptive pill, reducing its effectiveness; the pill is now easier to acquire (through clinics and/or the internet) and unexpected pregnancies may therefore occur in patients taking both. Cimetidine may increase the blood levels of sertraline by reducing its elimination by the liver. St John's wort also enhances the metabolism of the pill, and this can result in unwanted pregnancies.
Overall, OTCs are generally safe, though not where young children and pregnant women are concerned. Vitamins are safe unless taken in very high doses. Deficiency is rare in developed countries (apart from vitamin D), and vitamins are therefore often taken unnecessarily 'to achieve balance' or for 'vitality and energy', and other eye-catching spurious claims. Glucosamine, an amino sugar, seems to be the most popular OTC dietary supplement for the treatment of osteoarthritis. It is naturally present in shellfish and in some fungi. Apart from occasional allergic reactions and mild gastrointestinal symptoms, it is generally innocuous, though conclusive evidence for its efficacy in osteoarthritis is lacking. Fish oil supplements usually come from mackerel, herring, tuna, halibut, salmon and cod. There is some evidence that the omega-3 fatty acids contained in fish oils are beneficial for cardiovascular problems, but more trials are needed. Side effects are minimal and include mild gastrointestinal upset.19
Doctors' dilemma
Is there a solution? Probably not, though one way to increase consumers' awareness of the dangers associated with OTCs would be to change their status to match that of drugs such as simvastatin: still sold over the counter, but under a pharmacist's supervision. The list of OTCs is growing, leading to an increased intake of phytochemicals in addition to the usual gamut of medicines used to treat upper respiratory infections, and potentially fatal interactions can occur between OTCs and conventional drugs. Providing better training for pharmacy staff and restricting the quantity sold per customer should also be considered, though with so many retail outlets selling these products this is probably unrealistic. Besides, many of these products are available on the shelves, not necessarily at the pharmacy counter.
The most common addictions are to combinations of opioids with standard analgesics. The Internet is an easy source of prescription drugs, increasing their availability and eliminating the need to see a doctor. Is there an epidemic of prescription opiate use? It is difficult to tell. Effective prevention, public information and treatment policies require sound epidemiological data about drug use, to ensure policy-making is not distorted by stories of celebrity arrests and media-generated hysteria, which tend to give the impression that use of illegal drugs is rife. The lack of knowledge about the ubiquitous presence of unknown ingredients in OTCs may be a source of concern in the future when even more become easily available.
It is difficult for doctors and other health care professionals to advise patients on the effectiveness and safety of OTCs. The number of well-designed studies available for review is limited, and they are often conducted in a small number of healthy participants and for short time periods only.20 A questionnaire survey of 238 follow-up UK rheumatology outpatients in three centres found nearly half (44%) had taken various herbal remedies or over-the-counter (OTC) preparations over the past 6 months. The most commonly used were cod liver oil, glucosamine and/or chondroitin, and evening primrose oil. Rheumatology outpatients have a particularly high risk of interactions with conventional medication because of polypharmacy and comorbidity. Ginkgo biloba, devil's claw, ginger and garlic may have antiplatelet or anticoagulant effects and may exacerbate the gastrointestinal bleeding risk of nonsteroidal anti-inflammatory drugs (NSAIDs) or corticosteroids. Echinacea (taken by 4%) may be hepatotoxic and could exacerbate the adverse effects of disease-modifying antirheumatic drugs (DMARDs). Most patients are unaware of the potentially harmful interactions.21
The authors carried out a small audit of consecutive outpatients and staff seen in our unit, selected on a random basis. Of the 45 people who completed the questionnaire, 70% affirmed use of OTCs, either currently or in the past. A high percentage (73%) had never been asked by their GP about these 'alternative medicines', and among health professionals 25% never enquired about patients' use of OTCs. More than half (63%) were unaware of possible side effects before taking them and nearly 50% had not considered that the OTCs might interact with prescribed medication. As would be expected, the majority of users (84%) did not experience any side effects. Nonetheless, 16% experienced unpleasant adverse effects such as tachycardia, nightmares, drowsiness, cough, constipation and exacerbation of asthma. When asked who had recommended the preparation(s), the response was generally 'friends' or 'I knew about it myself'. When asked why they bought it over the counter, the responses were 'just in case I need it', 'cheaper than prescription', 'it is a natural remedy' and 'it's only Nurofen'. As with most surveys, the commonest preparations were analgesics, laxatives, glucosamine for arthritis, and decongestants. Others bought OTCs to promote good health because they are 'herbal and natural', for example ginkgo biloba. In a separate random survey of 50 consecutive outpatients carried out by FJD and N El-H, some 40% were taking herbal remedies.
Conclusion
Medical care has become fragmented in recent years. The family doctor of old no longer acts as a gatekeeper coordinating the medications patients are prescribed. A gynaecologist may prescribe the pill to a patient and a walk-in clinic may prescribe an antibiotic to the same patient. How does a doctor inform the patient that antibiotics decrease the effectiveness of the pill if the doctor is unaware of the myriad other supplements, including OTC medications, a patient is taking? Although a patient should bear some responsibility, in reality he or she may not have the expertise to discern the complications and interactions of medications. Besides, the use of multiple preparations is more often a problem of older age groups, who frequently have many health problems. The family pharmacist has also been lost to mail-order pharmacies and sometimes suspect internet websites. Because of the increase in the numbers of prescriptions and OTCs, doctors and pharmacists are using computer programs to establish what is safe and what is not.
Strategies to mitigate these problems could include more general enquiries about prescription, OTC and herbal drug use at the initial examination.22 Even though some patients may be aware of the potential for drug misuse, others are naive and do not realize the harm involved. Providing containers to enable patients to dispose of unused or unneeded prescription or OTC medications is another tactic. Treating the underlying causes (of pain, for example) more aggressively may obviate the need for patients to add OTCs to their drug list. Careful record keeping of prescription refills and tighter controls over prescription blanks are other practical measures. Where patients have become addicted to medications, programmes such as Narcotics Anonymous may help.
Prolactin is a polypeptide hormone that is secreted by the lactotrophs of the anterior pituitary gland. Prolactin secretion shows a circadian rhythm1, with the highest levels occurring during the night and the nadir occurring during the afternoon and evening. The best known function of prolactin is the stimulation and maintenance of lactation.
Normal basal levels of serum prolactin are approximately 20 to 40 ng/ml in women (depending on the phase of their menstrual cycle) and 15 ng/ml in men. However, these concentrations can also vary with age. Hyperprolactinemia is diagnosed when serum prolactin concentrations are >20-25 ng/ml (400-500 mU/l) on two separate occasions3.
Hyperprolactinemia is the most common disorder of the hypothalamic-pituitary-gonadal (HPG) axis4 and can have physiological causes (pregnancy, nursing, sleep, stress, sexual intercourse) or pathological causes such as a prolactin-secreting tumour (prolactinoma). Multiple factors are involved in prolactin secretion (Figure 1). However, hyperprolactinemia is also a common side-effect of traditional antipsychotics (e.g. haloperidol) and is associated with the use of some newer second generation agents2, 6.
Figure 1: Factors involved in Prolactin secretion
The prevalence of hyperprolactinemia is low in the general population (0.4%), but it can be as high as 9 to 17% in women with reproductive disorders. The disease occurs more frequently in women than in men, and multiple signs and symptoms are associated with hyperprolactinemia (Table 1).
Multiple variables affect the probability of developing breast cancer (Table 2), and the most important risk factors seem to be related to estrogen and possibly prolactin (Table 3).
Table 1: Signs and symptoms associated with hyperprolactinemia
Sexual dysfunction: decreased libido, impaired arousal, impaired orgasm
Acne and hirsutism in women (due to relative androgen excess compared with low estrogen levels)
Behavioural effects
Decreased bone mineral density (BMD) which may lead to increased risk of osteoporosis.
Table 2: Probability of Developing Breast Cancer32 (variable: increased risk / decreased risk)
Age: older / younger
Socioeconomic status: higher / lower
Family history of breast cancer: present / absent
Racial: Caucasian / Oriental
Geographic: America / Asia
Marital status: single / married
Age at first pregnancy: older / younger
History of multiple pregnancies: present / absent
Age at menarche: younger / older
Age at natural menopause: older / younger
Artificial menopause: absent / present
Table 3: Epidemiology of breast cancer7
Age of menarche
Late pregnancy
Obesity
Caucasian females have slightly higher incidence
The highest incidence of breast cancer occurs after age 35, with 83% of the cases occurring after age 50 and only 1.5% under age 30
1 in 11 women will develop breast cancer sometime during their lifetime
The highest incidence of breast cancer in the US is found in the northeastern part of the country
The women with previous cancer of one breast are at risk for cancer in the opposite breast
A woman whose natural menopause occurs before age 45 has only half the breast cancer risk of those whose menopause occurs after the age of 557.
Methods
Pubmed.gov was searched using keywords.
Antipsychotics and Hyperprolactinemia
Antipsychotics cause hyperprolactinemia by blocking D2 receptors on lactotrophs, thus preventing dopaminergic inhibition of prolactin secretion. Furthermore, it has been suggested that the degree of elevation of prolactin correlates with the degree of occupancy of D2 receptors in excess of 50%8.
Most studies have shown that conventional antipsychotics are associated with a two- to tenfold increase in prolactin levels9, 10. In general, second generation antipsychotics produce a smaller increase in prolactin than conventional agents. Among the second generation antipsychotics associated with increased prolactin are amisulpride, zotepine and risperidone11, 12, 13.
Antipsychotic induced Hyperprolactinemia and Breast cancer
Prolactin is known to increase the incidence of spontaneously occurring mammary tumors in mice14 and increase the growth of established carcinogen-induced mammary tumors in rats15.
Prolactin and other sex hormones, such as estradiol and progesterone, are important in normal mammary gland growth and development as well as lactation. Both animal and in vitro data suggest that prolactin is involved in tumorigenesis by promoting cell proliferation, increasing cell motility, and improving tumor vascularization. Whereas prolactin and its receptor are found in normal and malignant tissues, concentrations of both are generally higher in malignant tissue16.
Several studies have linked hyperprolactinemia to an increased risk of breast cancer in women17, 18. Mechanisms that have been suggested to explain this possible action of prolactin include the increased synthesis and expression of prolactin receptors in malignant breast tissue and a prolactin-induced increase in DNA synthesis in breast cancer cells in vivo18.
One of the hypothesized roles of prolactin in the development of mammary tumors is to create mammary gland conditions favorable for the action of carcinogens through its stimulation of the rate of mammary gland DNA synthesis, a measure of the frequency of mammary gland cell division19.
Several epidemiological studies have investigated whether female psychiatric patients receiving treatment with antipsychotics have a higher incidence of breast cancer, but results have been conflicting. However, the most recent and methodologically strongest study found that antipsychotic dopamine receptor antagonists conferred a small but significant risk of breast cancer. This study had a retrospective cohort design and compared women who were exposed to prolactin-raising antipsychotics with age-matched women who were not20.
Conversely, other studies have shown no correlation between hyperprolactinemia and breast cancer21, 22. Furthermore, as most breast cancers are thought to be fueled by estrogen23, and hyperprolactinemia causes estrogen deficiency24, it is perhaps surprising that hyperprolactinemia has been linked with an increased risk of breast cancer. Indeed, post-operative hyperprolactinemia in breast cancer patients has been shown to improve disease-free and overall survival. Obviously, more studies are necessary to define any possible links between hyperprolactinemia and breast cancer.
In view of these problems it would be of interest to circumvent the contentious issue of possible carcinogenic effects of dopamine antagonists by examining a classical condition of dopamine loss or attenuation, such as Parkinson's disease (PD). Using computerized death registers of the National Center for Health Statistics for the years 1991 through 1996, covering an estimated 12,430,473 deaths of persons over forty, one study extracted 144,364 cases with PD. Tellingly, PD patients showed a highly significant reduction in overall cancer incidence. PD patients' resistance to breast cancer might conceivably be attributed to dopaminergic treatment antagonizing hyperprolactinemia26, 27, 28.
Another recent study showed that dopaminergic therapy inhibits angiogenesis thereby acting as an anti-tumor agent29.
Epidemiological studies of women who have received prolactin-raising drugs such as reserpine and perphenazine have not disclosed an increased risk30.
Antipsychotic induced Hyperprolactinemia and Other cancers
Antipsychotics have been hypothesized to account for the reduced cancer occurrence observed in patients with schizophrenia in a number of studies. This reduction has been found primarily in men in smoking-related cancers, and in prostate and rectal cancer.
In addition, a study found a reduced risk of rectal cancer in both men and women as well as indications of a reduced risk of colon and prostate cancer in this population-based cohort of neuroleptic users. Reassuringly, they observed no increased risk of breast cancer in female users31.
Comments and recommendations
Hyperprolactinemia results from treatment with any drug that disrupts dopaminergic function on the HPG axis and is not limited to the use of antipsychotics.
Management of suspected antipsychotic-associated hyperprolactinemia should exclude all other causes, involve regular monitoring of adverse effects and include a regular risk-benefit discussion with the patient.
Switching the patient to a prolactin-sparing antipsychotic (e.g. aripiprazole, olanzapine, quetiapine or clozapine) usually proves effective, though there is also a risk of relapse.
It seems prudent to avoid prescribing prolactin-raising antipsychotics in patients with a past history or family history of breast cancer.
It is premature to mandate warning patients of an unknown and undemonstrated increase in the risk of developing breast cancer associated with neuroleptic treatment.
Before initiating antipsychotic treatment, a careful examination of the patient is necessary.
One should examine the patient for evidence of sexual adverse events, including menorrhagia, amenorrhoea, galactorrhoea and erectile/ejaculatory dysfunction. If evidence of any such effects is found, then the patient's prolactin level should be measured.
Patient history, physical examination, pregnancy test, thyroid function test, blood urea and creatinine level can help determine if other etiologies are responsible.
Presence of headache and visual field defects is suggestive of a sellar space-occupying lesion (MRI indicated), but the absence of these features does not exclude such pathology.
History of menstrual cycling (duration, amount and intervals of menstruation) as well as lactation and sexual functioning should be taken before antipsychotic medication is initiated.
Obtain a pretreatment prolactin level, which one can compare with subsequent samples if the patient develops symptoms associated with relatively modest hyperprolactinemia.
The risk-benefit ratio for treatment of antipsychotic-induced hyperprolactinemia needs to be assessed on an individual basis.
If there is doubt about the cause of the hyperprolactinemia, the patient should be referred to an endocrinologist.
Current recommendation
A rise in prolactin concentration should not be of concern unless complications develop, and until such time no change in treatment is required.20
Conclusions
There are no definitive data suggesting an increased risk of breast cancer available at this time, thus the author concludes:
Further prospective studies are needed in this area, with large number of patients, before a more definitive answer can be provided.
Detection of existing mammary tumor by breast examination or studies (mammogram) is recommended prior to administration of neuroleptics.
Development of newer antipsychotic drugs that do not increase serum prolactin level may be indicated.
Strengths
Each article found by the search terms was reviewed.
Data were extracted from each article to answer the research question.
Pubmed.gov is a very large searchable database.
Limitations
This literature review was conducted by a single author, thus bias on the part of the author cannot be ruled out.
The author was limited by time and could not review articles available in other databases.
Key Points
Most studies report no increased risk of breast cancer associated with the use of antipsychotics.
Only one study reported a positive correlation between neuroleptic induced hyperprolactinemia and increased risk of breast cancer.
Other studies report inconclusive data.
At this time we do not have definitive data suggesting increased risk of breast cancer secondary to hyperprolactinemia caused by antipsychotics.
Further prospective studies are desirable.
The author concludes that thorough screening of the patient before starting antipsychotics is desirable to avoid any additional risk.
A 73 year old male retired civil servant with a background of spinal bulbar atrophy and hypertension presented to his General Practitioner (GP) for a routine health check. He was taking bendroflumethiazide, propranolol, atorvastatin and aspirin. His brother also has spinal bulbar atrophy.
The GP sent routine blood tests, which came back as follows: Haemoglobin 8.5 (13-17g/dL), Mean Cell Volume 84.9 (80-100fl), White Cell Count 3.4 (4-11 x10^9/L), Neutrophil Count 0.68 (2-8 x10^9/L), Platelets 19 (150-400 x10^9/L). A random blood sugar reading was 18 (3.9-7.8mmol/L). Renal function, bone profile and hepatic function tests were normal. The General Practitioner referred the patient urgently to the local Haematology unit for further assessment.
On further review the patient complained of tiredness but had had no infections or bleeding. There were no night sweats or recent foreign travel. Physical examination was unremarkable, with no lymphadenopathy or organomegaly.
A blood film showed marked anaemia with red cell anisopoikilocytosis, prominent tear drop cells and neutropenia with normal white cell morphology. There were no platelet clumps. A diagnostic investigation followed.
QUESTIONS
What are the differential diagnoses of pancytopenia and which causes are likely here given the findings on examination of the peripheral blood film?
Infections - Viral infections, including cytomegalovirus, hepatitis A-E, Epstein-Barr virus, parvovirus B19 and non A-E hepatitis viruses, can cause aplastic anaemia1. The classical picture would be pancytopenia in a young patient who has recently had ‘slapped cheek syndrome’ from parvovirus B19 and develops transient bone marrow aplasia. Tropical infections such as visceral leishmaniasis may cause pancytopenia, splenomegaly and a polyclonal rise in immunoglobulins2. Overwhelming sepsis may also cause pancytopenia with a leucoerythroblastic blood film (myeloid precursors, nucleated red blood cells and tear drop red cells). HIV is also an important cause of cytopenias.
Medications - Common medications such as chloramphenicol, azathioprine and sodium valproate may cause aplastic anaemia. In this case the history did not include any recently introduced medications. The other very common cause of pancytopenia in modern practice is chemotherapy.
Bone marrow disorders - Tear drop cells are a key finding and clue in this case. They suggest an underlying bone marrow disorder and marrow stress. In the context of a known active malignancy they are almost always indicative of bony metastases. Our patient did not have a known malignancy and there was nothing to suggest this on the history or physical examination, although in a man of this age metastatic prostate cancer should be considered. Other bone marrow disorders that would need to be considered are acute leukaemia (which was the diagnosis here), myelodysplasia and myelofibrosis. Splenomegaly would be especially significant in this case, as in combination with tear drop cells and pancytopenia in an elderly patient it would be highly suggestive of myelofibrosis3.
B12 and folate deficiency – This may cause pancytopenia, tear drop cells and a leucoerythroblastic blood picture4&5. The mean corpuscular volume in this case, however, is normal, which argues somewhat against B12 and folate deficiency, as does the absence of hypersegmented neutrophils on the blood film. This cause is nevertheless very important, given that it is easily reversible and treatable.
Haemophagocytosis – This is a bone marrow manifestation of severe inflammation and systemic disease6. It has various causes including viruses (e.g. Epstein-Barr virus), malignancy and autoimmune disease. It should be considered in patients with prolonged fever, splenomegaly and cytopenias. It is diagnosed by characteristic findings on bone marrow biopsy.
Paroxysmal nocturnal haemoglobinuria – this is a triad of pancytopenia, thrombosis and haemolysis caused by a clonal stem cell disorder with loss of membrane proteins (e.g. CD55 and CD59) that prevent complement activation7.
Genetic disease – Fanconi anaemia is a rare autosomal recessive disease with progressive pancytopenia, malignancy and developmental delay. It is caused by defects in DNA repair genes.
The key finding in this case was tear drop cells on the blood film. These are part of a leucoerythroblastic blood picture seen in bone marrow disease, malignant marrow infiltration, systemic illness and occasionally haematinic deficiency. See above for why this is unlikely to be haematinic deficiency. Although tear drop cells can occur in systemic illness such as severe infection, the history here was not in keeping with this. The diagnoses remaining therefore are malignant bone marrow infiltration or a primary bone marrow disorder (myelodysplasia, acute leukaemia or myelofibrosis). There were no features in the history pointing towards a metastatic malignancy and therefore primary bone marrow disorder is the most likely diagnosis. The diagnosis was later established as acute myeloid leukaemia on bone marrow examination.
What investigations would help to confirm or eliminate the possible diagnoses?
Blood tests including a clotting screen, liver function tests, inflammatory markers and renal function will help to exclude other systemic diseases such as disseminated intravascular coagulation, sepsis, liver disease and thrombotic thrombocytopenic purpura, which may all give rise to cytopenias. An autoimmune screen may also suggest vasculitis, which can cause cytopenias.
Microbiology studies including virology tests (e.g. human immunodeficiency virus, Epstein-Barr virus and hepatitis viruses) may also be requested as appropriate given the clinical scenario and findings. Visceral leishmaniasis should be tested for according to travel history and clinical likelihood; Leishmania may be identified through serology and light microscopy (for amastigotes) or polymerase chain reaction of the bone marrow aspirate. Tuberculosis could be cultured from the bone marrow if suspected.
Haematinics are a crucial test and the aim should be to withhold transfusion until these results are known, in case the deficiency can easily be replaced, thereby negating the need for blood products. Remember that if haematinics are not tested before transfusion then the blood products will confound the test results.
Bone marrow biopsy, including aspirate and trephine, is a crucial investigation for morphological examination and, if indicated, microbiological testing. This will distinguish between the bone marrow disorders, including acute leukaemia, myelofibrosis, metastatic marrow infiltration and myelodysplasia. Haemophagocytic syndrome may also be suggested by bone marrow examination findings.
Imaging should be considered if there is suspicion of an underlying malignancy (e.g. CT of the chest, abdomen and pelvis), along with further blood tests such as the prostate specific antigen. Ultrasound could also be used to check for splenomegaly where clinical examination has not been conclusive.
Medication review is vital as this may reveal the diagnosis (e.g. use of chloramphenicol).
Flow cytometry may be considered to investigate for an abnormal clone in the case of paroxysmal nocturnal haemoglobinuria and may be used on bone marrow samples to further evaluate the cells.
Unless a very clear cause for the pancytopenia is obvious (e.g. haematinic deficiency or malignant infiltration) then bone marrow examination is crucial for establishing a diagnosis. This will also prevent inappropriate treatments being initiated.
What immediate management steps and advice would be given to this patient?
General measures for pancytopenia include blood product support. Red cells and platelets can be given for symptomatic anaemia and bleeding. There is no need to transfuse platelets if there are no signs of bleeding; alternatively, tranexamic acid could be used to avoid the risks associated with platelet transfusion. Infection should be treated urgently. Because of the neutropenia he should be advised to seek medical help if he develops a fever or sore throat. He should be followed up urgently in clinic with the results and given the contact details of the haematology department in the interim in case he develops any problems.
The specific treatment for pancytopenia rests on the exact cause found after investigation. In this case the diagnosis was acute myeloid leukaemia arising from a background of myelodysplasia. The treatment for acute myeloid leukaemia with curative intent would, in general, consist of induction chemotherapy with DA (daunorubicin and cytosine arabinoside) followed by consolidation with further chemotherapy, the type of which (e.g. high dose cytosine arabinoside or FLAG-Ida) would depend on the risk assessment of the disease, with possible consideration of an allogeneic bone marrow transplant after consolidation. Different approaches to consolidation chemotherapy, transplantation and small molecule inhibitors are currently being evaluated in clinical trials (e.g. the AML17 trial).
The other options, in older or frailer patients for whom high dose chemotherapy would be very toxic, are low dose palliative chemotherapy and transfusion support.
PATIENT OUTCOME
He has been supported with blood products (platelets and packed red cells for bleeding and anaemia respectively). After discussion with him and his wife he has elected to have palliative chemotherapy with low dose cytosine arabinoside. He will be seen regularly in the haematology clinic and day unit for review. We do not suspect a link between the leukaemia and spinal bulbar atrophy.
Claiming a historic triumph that has defined his presidency ever since, President Barack Obama signed the $1 trillion Patient Protection and Affordable Care Act (ACA) in a highly visible White House ceremony on Tuesday March 23, 2010 (using 20 different pens), thereby establishing health care as a ‘right’ of every American for the first time. It took all of his legislative and political skills to get the bill passed through both houses of Congress, as was suggested on these pages previously.1
Soon after President Obama signed the landmark legislation, 26 states filed lawsuits contesting that the health care legislation, which earned the nickname of “Obamacare” from its opponents, was unconstitutional for several reasons. The legal challenge created significant uncertainty about the viability and future implementation of the legislation. There was also growing concern about the law’s impact on the national debt, which became, and continues to be, an extremely divisive issue between the Democrats and the Republicans in the US Congress.
The lawsuit finally, as expected, made its way to the Supreme Court of the United States. The Court was looking at the legislation from three angles. First, at the core of the legislation was the requirement that nearly all Americans obtain health insurance by 2014 or face a financial penalty, a provision that came to be known as the ‘individual mandate’. The penalty would be recycled into “health exchanges” providing alternative options to low income Americans and small businesses for the purchase of health care. The ‘individual mandate’ was the backbone of the legislation that would cover millions of uninsured Americans, the majority of whom would be healthy young individuals. If this ‘individual mandate’ were to be struck down by the Court (as many expected), the second question would be what happens to the rest of the ACA, as the insurance industry supported the legislation because it would provide them with tens of millions of new healthy ‘customers’; if such healthy individuals were left out, the pool would contain mostly sicker individuals, compromising the profitability, and perhaps the very viability, of the insurance industry. The third issue related to the mandate for the states to accept a large number of individuals into the Medicaid program, which provides health care for the poor and those with incomes of up to 133% of the federal poverty level.2
The Supreme Court, mercifully for President Obama, upheld virtually the entire legislation in its historic decision on June 28, 2012. The four ‘liberal justices’ ( Justices Stephen Breyer, Ruth Ginsburg, Elena Kagan and Sonia Sotomayor) were joined by the conservative Chief Justice John Roberts in upholding the ‘individual mandate’. In what many observers of the court called a surprising twist, the justices held that the mandate was not constitutional under the ‘interstate commerce clause’, as argued by the administration, but was constitutional under Congress’ power of taxation. The other four dissenting conservative justices (Justices Samuel Alito, Anthony Kennedy, Antonin Scalia and Clarence Thomas) held that the Congress had exceeded its authority on several levels.
The reaction to the ruling was prompt and mixed. Dr. Jeremy A. Lazarus, President of the American Medical Association, said “The AMA has long supported health insurance coverage for all, and we are pleased that this decision means millions of Americans can look forward to the coverage they need to get healthy and stay healthy”. The President and CEO of the American Hospital Association, Mr Rich Umbdenstock, said “The decision means that hospitals now have much-needed clarity to continue on their path toward transformation”. Perhaps the President of the American College of Physicians stated it best: “We hope that a day will come when the debate will no longer be polarized between repeal on one hand, or keeping the law exactly as it is on the other, but on preserving all of the good things that it does while making needed improvements.”
The President of the U.S. Chamber of Commerce, Thomas J. Donohue, lamented that “While we respect the court’s decision, today’s Supreme Court ruling does not change the reality that the health care law is fundamentally flawed. It will cost many Americans their employer-based health insurance, undermine job creation and raise health care costs for all.” And in a scathing statement, the President of National Federation of Independent Business, Dan Danner, echoing the sentiments of a growing number of small businesses said “Under [the ACA], small-business owners are going to face an onslaught of taxes and mandates, resulting in job loss and closed businesses. We will continue to fight for the repeal of [the ACA] in the halls of Congress; only with [the ACA’s] full repeal will Congress have the ability to go back to the drawing board to craft real reform that makes reducing costs a No. 1 priority.” 3
This line of argument, apart from bringing some uncertainty, provided politicians with fodder as they moved closer to the Presidential elections. As expected, the Republicans (who control the House of Representatives) passed legislation repealing the law, but the bill died in the US Senate (controlled by the Democrats). The Republican Presidential candidate, Gov. Mitt Romney, framed the decision as a political call to arms: “What the Court did not do on its last day in session, I will do on my first day if elected President of the United States. And that is I will act to repeal Obamacare.” While both President Obama and Gov. Romney agree that Medicare costs have to be reined in, there is a fundamental difference in their approaches to cost cutting. President Obama’s plan relies on a powerful board to reduce payments to service providers and on gradually changing how hospitals and doctors are paid, in order to eliminate fee for service and establish pay for performance (pay for quality, not quantity). Gov. Romney would limit the amount future retirees receive from the federal government to approximately $7,000 (a so-called voucher system) and rely on private industry to find an efficient solution.
It is clear that Gov. Romney, who previously implemented ‘Obamacare’-type legislation as the governor of Massachusetts, flipped his position to appease the extreme right wing of the Republican Party. As usual, politics trumps policy. This drama continues to play out as we get closer to the election on November 6, 2012.
As identified by The Center for American Progress (CAP), an independent nonpartisan educational institute based in Washington, some of the popular provisions of the law are4:
The law provides for young adults to stay on their parents’ insurance to age 26, enabling 2.5 million young Americans to enrol on their parents’ policies (73% of young adults now have coverage as a result of this provision);
For seniors living on a fixed income (the Medicare patient population), one of the immediate benefits of the ACA was the closure of the prescription drug coverage gap (known as the ‘donut hole’), saving 4 million seniors about $2 billion on prescription drugs, or approximately $604 per person, in 2011 alone;
The law provides $11 billion to support and expand community health centres nationwide. More than 350 new community health centres were established in 2011 serving 50 million Americans in medically underserved areas;
Starting in 2014, the law prohibits health insurance carriers from excluding and/or denying coverage or charging higher premiums and limiting benefits to those with pre-existing medical conditions (as happens currently in too many cases);
50,000 Americans have already enrolled in the Pre-existing Condition Insurance Plan (PCIP), which ensures medical services, including prescription drugs, for those with pre-existing conditions as soon as possible;
Provision of $200 million to expand school-based health centres for primary care, dental care, behavioural health services and substance abuse counselling;
In 2011 alone, 85 million Americans benefitted from preventive services included in the legislation. Many more will benefit since a major provision of the preventive services for women took effect in August 2012;
However, several components of the legislation remain unclear and their impact rather unknown. As an example, the Obama administration fought hard for the formation of the Independent Payment Advisory Board (IPAB) to address the inordinate influence of stakeholders in Congressional decisions over Medicare. This group of 15 nonpartisan experts is responsible for developing payment and related Medicare policy changes to ensure that Medicare spending does not exceed budget targets tied to economic growth. Although now the law, the IPAB may never be formed because the Senate is unlikely to find the 60 votes required to confirm IPAB members (unless the election brings unforeseen changes in the makeup of the Senate). Politics may again trump policy. The question of how payment approaches should evolve from “volume” to “value” remains vexing. The Center for Medicare & Medicaid Innovation, charged with developing the pilot programs that may result in a reformed delivery system, has no pilots that focus on developing alternative models to reimburse physician services.5, 6
And the “invisible problem” of physician shortage! While there is growing bipartisan appreciation that the primary care workforce is insufficient to handle increasing demand for primary care services, the problem has not been fully addressed. The Association of American Medical Colleges estimates that in 2015 the country will have 62,900 fewer doctors than needed and those numbers will more than double by 2025. 7, 8
In the coming months the states may become the battleground for implementing the “health exchanges”. By 2014 states are required to establish American Health Benefits exchanges and Small Business Health Options Program (SHOP) exchanges. These exchanges, called “health exchanges” for short, are basically subsidized marketplaces with tax credits that allow consumers to shop for their health insurance at very competitive rates. Individuals who will not be eligible for Medicaid and with incomes of up to 400% of the federal poverty level will have access to these health exchanges to purchase insurance. Such subsidies and tax credits will also be available to businesses with fewer than 100 employees.9
It is also becoming apparent that the financial burden of the legislation will be significantly higher than initially estimated. For example, the non-partisan Congressional Budget Office (CBO) estimates that 80% of Americans who will face a penalty for lack of health insurance under the ‘individual mandate’ would be those with a yearly income of $55,250 (for individuals) and $115,250 (for couples). This is in contrast to the statements of President Obama, who continues to pledge that he will not raise taxes on individuals making less than $200,000 and couples making less than $250,000. And the Republican side of the Senate Budget Committee estimates that Obamacare will cost $2.6 trillion in its first real decade, since the bill does not fully go into effect until 2014.10
Fortunately for President Obama, the prestigious Institute of Medicine released a report last month, on September 6, confirming what has been suggested by him and others: that the US health care system wastes almost 30% ($750 billion) each year on unnecessary procedures, fraud and waste. The administration has therefore redoubled its efforts to check waste and fraud in order to pay for the cost of the ACA. However, the real battle will begin as soon as the Congress reconvenes in January 2013, when it is immediately faced with dealing with “sequestration”. Originally a legal term referring to valuable property being locked away for safe keeping by an agent of the court, the term was adapted by Congress in 1985 for fiscal discipline. Under this rule, an amount of money equal to the difference between the cap set in the Budget Resolution and the amount actually appropriated is "sequestered" by the Department of the Treasury and not handed over to the departments to which it may originally have been appropriated by the Congress. The Budget Control Act of 2011 established a Congressional task force (the ‘Super Committee’) that was charged with making recommendations to cut the US budget deficit by $1.5 trillion by November 23, 2011; failure to do so would automatically trigger sequestration. On November 21, 2011, the committee issued a statement that it had failed to reach agreement. This failure is viewed by most as a triumph of political ideology over genuine leadership. But the prospect of sequestration has come to be seen as so catastrophic that key members of Congress and the President are expected to abandon brinkmanship and come to an agreement in early 2013.
So, as the drama and the debate continue vigorously in the days leading up to the November 6 elections, it is clear that “Obamacare” will continue to divide the US Congress and the country. Irrespective of which party controls the Congress and who becomes the next President of the US, “Obamacare” is here to stay. No matter how hard the Republican Party may try, it will face a monumental task in reversing the course of history. There will be bickering, name calling, finger pointing and horse trading. But the warring factions will realize that the escalating costs and complexities of the health care system demand that the legislators and the President come together to find real solutions to keep the American health care system the best in the world. The real challenges will remain the same no matter who is elected President: to stem the unsustainable rise of national health expenditure as a percentage of the gross national product (from 7.2% in 1970 to over 17% in 2010), the rapidly increasing number of Americans without health insurance (approaching almost 50 million), the exploding national debt and, more immediately, the looming threat of sequestration.
Alopecia areata is a non-scarring, autoimmune, inflammatory hair loss affecting the scalp and/or body. Although the etiopathogenesis of alopecia areata is still unknown, the most widely accepted hypothesis is that it is a T-cell mediated autoimmune condition that occurs in genetically predisposed individuals. The term ‘alopecia areata’ was first used for this disorder by Sauvages1. Alopecia areata has a reported incidence of 0.1-0.2%, with a life-time risk of 1.7%2-4. The disease can begin at any age, but the peak incidence is between 20 and 50 years of age5. Both sexes are equally affected and no racial variation has been reported. Clinically, alopecia areata may present as a single well demarcated patch of hair loss, multiple patches, or extensive hair loss in the form of total loss of scalp hair (alopecia totalis) or loss of the entire scalp and body hair (alopecia universalis). Histopathologically, alopecia areata is characterized by an increase in the number of catagen and telogen follicles and the presence of a perifollicular lymphocytic infiltrate around the anagen phase hair follicles. The condition is thought to be self-limited in the majority of cases, but in some the disease has a progressive course and needs active treatment in the form of oral or topical therapeutic options. Progressive alopecia areata is associated with severe social and emotional impact.
Clinical features
Alopecia areata mostly presents as a sudden loss of hair in well demarcated localized areas. The lesion is usually a round or oval flat patch of alopecia with normal skin colour and texture involving the scalp or any other region of the body. The patch of alopecia may be isolated or there may be numerous patches. It usually has a distinctive border where normal hair demarcates the periphery of the lesion. In acute phases, the lesions can be slightly erythematous and oedematous.
The patches of alopecia areata are usually asymptomatic, although some patients may complain of local paraesthesia, pruritus or pain. The affected hairs undergo an abrupt conversion from anagen to telogen, clinically seen as localized shedding. Characteristic hairs, known as ‘exclamation point hairs’, may be seen within or around the areas of alopecia. These hairs are tapered towards the scalp end with thickening at the distal end. They may also demonstrate deposition of melanin pigment in the distal extremity, known as Wildy’s sign. Although not absolutely pathognomonic, this strongly suggests the diagnosis of alopecia areata. A hair pull test conducted at the periphery of the lesion may be positive (six or more hairs extracted), which correlates with disease activity. In the chronic phases the test is negative, since the hair is not plucked as easily as in the acute phases.
Another important clinical sign that can aid in the diagnosis is the presence of ‘cadaverous hair’. These are the hairs in which there occurs a fracture of the shaft inside the hair follicle, producing blackened points inside the follicular ostia resembling comedones. In alopecia areata, the hair loss progresses in a circumferential pattern. Often, distinct patches merge to form large patches. Upon regrowth, hairs will often initially lack pigment resulting in blonde or white hairs7.
Extrafollicular involvement in alopecia areata:
a) Nail changes: Nail changes are more frequent in children (12%) than in adults (3.3%)8. The prevalence of nail changes is greater in the more severe forms of alopecia areata such as alopecia universalis and alopecia totalis. Finger nails are more commonly involved than toe nails. Pitting is the most common finding. Other nail changes include koilonychia, onycholysis, onychomadesis, punctate leukonychia, trachyonychia, Beau’s lines and red lunulae8-11.
b) Ocular changes: Various ocular changes have been reported to occur in alopecia areata. These include focal hypopigmentation of the retina12, lens opacities, posterior subcapsular cataracts13, decreased visual acuity, Horner’s syndrome, heterochromia of the iris14, miosis and palpebral ptosis.
Treatment of alopecia areata
Treatment of alopecia areata is not mandatory in every affected patient because the condition is benign in the majority and spontaneous remission is common. Treatment is mainly directed towards halting disease activity, as there is no evidence that the available treatment modalities influence the ultimate natural course of the disease. Treatment modalities are usually tailored to the extent of hair loss and the patient’s age. Targeting the marked inflammatory process occurring in alopecia areata, corticosteroids have by far been the most commonly used treatment modality-16. Few treatments have been subjected to randomized controlled trials and, except for contact immunotherapy, there is a paucity of published data on their long term outcomes. Currently, new treatments targeting the immune system are being explored for use in alopecia areata.
Topical treatments
Topical steroids
Intralesional steroid injections
Topical contact sensitizers
Anthralin
Minoxidil
Topical retinoids
Tacrolimus
Systemic treatments
Systemic corticosteroids
Sulfasalazine
Azathioprine
Methotrexate
Oral zinc sulphate
Photo-and photochemotherapy
PUVA
NBUVB
Excimer laser
Miscellaneous and Non-pharmacological treatment
Dermatography, wigs
Hypnotherapy etc
Topical treatment options
Topical corticosteroids:
Several topical corticosteroids with varying levels of efficacy have been used to treat alopecia areata. These include fluocinolone acetonide cream17, fluocinolone scalp gel, betamethasone valerate lotion18, clobetasol propionate ointment19, dexamethasone in a penetration-enhancing vehicle and halcinonide cream20. They are a good option in children because of their painless application and wide safety margin21. Topical corticosteroids are ineffective in alopecia totalis/universalis. Folliculitis is a common side effect of topical corticosteroid treatment, appearing after a few weeks of treatment. Telangiectasia and local atrophy have also been reported. Treatment must be continued for a minimum of 3 months before regrowth can be expected, and maintenance therapy is sometimes necessary.
Intralesional corticosteroids:
Intralesional corticosteroids are widely used in the treatment of alopecia areata. In fact, they are the first-line treatment for localized disease involving <50% of the scalp22. Hydrocortisone acetate (25mg/ml) and triamcinolone acetonide (5-10mg/ml) are commonly used. Triamcinolone acetonide is usually administered at a concentration of 5mg/ml using a 0.5 inch long 30-gauge needle in multiple 0.1 ml injections approximately 1 cm apart22-23. The solution is injected in or just beneath the dermis, and a maximum of 3 ml on the scalp in one visit is recommended23. Lower concentrations of 2.5mg/ml are used for the eyebrows and face. Regrowth is usually seen within 4-6 weeks in responsive patients. Treatments are repeated every 3-6 weeks. Skin atrophy at the injection sites is a common side effect, particularly if triamcinolone is used, but this usually resolves after a few months. Repeated injections at the same site or the use of higher concentrations of triamcinolone should be avoided as this may lead to prolonged skin atrophy. Pain limits the practicality of this treatment method in children who are less than 10 years of age. Severe alopecia areata, alopecia totalis, alopecia universalis and rapidly progressive alopecia areata respond poorly to this form of treatment25.
Anthralin:
Dithranol (anthralin) and other irritants have been used in the treatment of alopecia areata. The exact mechanism of action is unknown, but it is believed to act through immunosuppressive and anti-inflammatory properties involving the generation of free radicals. It is used at concentrations ranging from 0.5 to 1% for 20-30 minutes, after which the scalp should be washed with shampoo in order to avoid excessive irritant effects. The applications are made initially every other day and later on daily. Adverse effects include pruritus, erythema, scaling, staining of treated skin and fabrics, folliculitis, and regional lymphadenopathy26-27. In an open study, 25% of patients with severe alopecia areata responded positively to local applications of 0.5-1% anthralin. More placebo-controlled studies are needed to justify the use of anthralin in alopecia areata.
Minoxidil:
Minoxidil appears to be effective in the treatment of alopecia areata. Its mechanism of action has yet to be determined, but it is known to stimulate DNA synthesis in hair follicles and has a direct action on the proliferation and differentiation of keratinocytes28. In one clinical study, hair growth was demonstrated in 38% and 81% of patients treated with 1% and 5% minoxidil respectively; 5% minoxidil solution is therefore usually recommended as a treatment option in alopecia areata. No more than 25 drops are applied twice per day regardless of the extent of the affected area. Initial regrowth can be seen within 3 months, but continued application is needed to achieve cosmetically acceptable regrowth. Minoxidil has also been studied in combination with anthralin29, topical betamethasone propionate30 and prednisolone31. Minoxidil is of little benefit to patients with severe alopecia areata, alopecia totalis or alopecia universalis. The possible side effects of minoxidil are allergic and irritant contact dermatitis and hypertrichosis, which is usually reversible on interruption of the treatment.
Topical immunotherapy:
Topical immunotherapy is the best documented treatment so far for severe and refractory cases of alopecia areata. Topical immunotherapy is defined as the induction and periodic elicitation of allergic contact dermatitis by applying a potent contact allergen33. In 1965, the alkylating agent triethyleneimino benzoquinone was the first topical sensitizer used to treat cutaneous disease, but it was abandoned on account of its mutagenic potential. Later, nitrogen mustard, poison ivy, nickel, formalin and primin were tried, mainly as topical immunotherapy for alopecia areata and warts. Contact immunotherapy was introduced in 1976 by Rosenberg and Drake. Later, the potent contact allergens dinitrochlorobenzene (DNCB) and diphenylcyclopropenone (DPCP) replaced the allergens used earlier33. DNCB is mutagenic against Salmonella typhimurium in the Ames test and is no longer used. Neither squaric acid dibutylester (SADBE) nor DPCP is mutagenic. DPCP is more stable in solution and is usually the agent of choice.
Mechanism of action: Topical immunotherapy acts by varied mechanisms. The most important is a decrease in the CD4 to CD8 lymphocyte ratio, which changes from 4:1 to 1:1 after contact immunotherapy. A decrease in intra-bulbar CD6 lymphocytes and Langerhans cells is also noted. Happle et al proposed the concept of ‘antigenic competition’, in which an allergic reaction generates suppressor T cells that non-specifically inhibit the autoimmune reaction against a hair follicle constituent. Expression of class I and II MHC molecules, which is normally increased in areas affected by alopecia areata, disappears after topical immunotherapy treatment34. A ‘cytokine inhibitor’ theory has also been postulated34.
Method of sensitization: The protocol for contact immunotherapy was first described by Happle et al in 1983. The scalp is the usual sensitization site. For the initial sensitization, a cotton-tipped applicator saturated with 2% DPCP in acetone is applied to a small area. Patients are advised to avoid washing the area and to protect it from sunlight for 48 hours. After 2 weeks, a 0.001% solution of DPCP is applied to the scalp, and the application of the contact allergen is then repeated weekly in increasing concentrations. The usual concentration of DPCP that ultimately causes mild contact eczema is 0.01-0.1%, and this is repeated weekly until a response is seen. An eczematous response indicates that sensitization has taken place; only 1-2% of patients fail to sensitize. It is important to remember that DPCP is degraded by light and should therefore be stored in the dark, and the patient should wear a wig or hat during the day after application of DPCP. DPCP immunotherapy has even been combined with oral fexofenadine treatment with good effect36.
Evaluation of efficacy: The clinical response after six months of treatment is rated according to the grading system proposed by MacDonald Hull and Norris37:
Grade 1- Regrowth of vellus hair.
Grade 2- Regrowth of sparse pigmented terminal hair.
Grade 3- Regrowth of terminal hair with patches of alopecia.
Grade 4- Regrowth of terminal hair on scalp.
If no regrowth is observed within six months of treatment, the patient is considered to be a non-responder. Plucked hairs are examined under light microscopy to assess the anagen/telogen ratio.
A review of most of the published studies of contact immunotherapy concluded that 50-60% of patients achieve a worthwhile response, but the range of response rates was very wide (9-87%). Patients with extensive hair loss are less likely to respond. Other reported poor prognostic factors include the presence of nail changes, early onset disease and a positive family history39.
Topical immunotherapy can lead to side effects such as persistent dermatitis, painful cervical lymphadenopathy, generalized eczema, blistering, contact leukoderma and urticarial reactions. Systemic manifestations such as fever, arthralgia and yellowish discoloration of hair are noted more often with DNCB.
In poor responders to DPCP, squaric acid dibutylester (SADBE) can be tried as a contact sensitizer. The method of application is the same as with DPCP but the applications are done once or twice weekly40.
Good care should be taken to avoid contact with the allergen by handlers, including pharmacy and nursing staff. Those applying the antigen should wear gloves and aprons. There is no available data on the safety of contact immunotherapy during pregnancy and it should not be used in pregnant women or in women intending to become pregnant.
Tacrolimus:
Tacrolimus is a topical calcineurin inhibitor that, following T-cell activation, inhibits the transcription of several cytokines including IL-2, IFN-gamma and TNF-α. Yamamoto et al reported that tacrolimus stimulated hair growth in mice41, although subsequent studies have shown conflicting results. Recently, Price et al reported an 11-patient study in which none of the patients had terminal hair growth in response to tacrolimus ointment 0.1% applied twice daily for 24 weeks43.
Topical garlic
Garlic is a commonly used home remedy for alopecia areata in India and elsewhere in the world. One double-blind study analyzed the effect of a combination of topical garlic gel and betamethasone valerate ointment in alopecia areata and found the combination useful in the majority of patients, with a statistically significant difference between the treatment and control groups44.
Topical retinoids:
Among the topical retinoids, tretinoin and bexarotene have been tried in alopecia areata with mixed results-46. Skin irritation is a very common side effect, and efficacy remains doubtful in the absence of double-blind randomized trials.
Prostaglandin analogs:
The propensity of certain prostaglandin analogues used as anti-glaucoma eye drops to cause hypertrichosis has been exploited in the treatment of alopecia areata. These prostaglandin analogues include latanoprost and bimatoprost, and they have been used in the treatment of alopecia areata involving the eyelashes-48. However, the results obtained with these drugs have not been encouraging49.
Systemic treatments
Systemic treatments, as a rule, are used only in progressive forms of alopecia areata and, given the immune nature of the disease, the majority of these options are immunosuppressive or immunomodulatory.
Systemic corticosteroids:
The use of systemic corticosteroids for the treatment of alopecia areata is much debated. Some authors support a beneficial role of systemic steroids in halting the progression of alopecia areata, but many others have had poor results with this form of therapy. The suggested dosages are 0.5-1 mg/kg/day for adults and 0.1-1 mg/kg/day for children50. Treatment courses range from 1-6 months, but prolonged courses should be avoided to prevent corticosteroid side effects. The side-effect profile of corticosteroids, together with the long-term treatment requirements and high relapse rates, makes systemic corticosteroids a more limited option. In addition to daily oral administration, there are several reports of high-dose pulsed corticosteroid treatments employing different oral and intravenous regimens51-53. Many of these regimens have been tried in alopecia areata with encouraging results, but the majority of these studies have been non-blind open studies. One such pulsed regimen employs a high-dose oral corticosteroid on two consecutive days every week, with a gap of 5 days between the pulses. This modality, known as oral minipulse therapy (OMP), has been tried in many skin diseases in addition to alopecia areata, such as vitiligo54-55 and lichen planus. Some open-label studies of corticosteroid OMP therapy have reported encouraging results in alopecia areata53.
Sulfasalazine:
Because of its immunomodulatory and immunosuppressive actions, sulfasalazine has been used in alopecia areata with reports of good hair regrowth. The drug is administered orally, usually as enteric-coated tablets to minimize gastrointestinal side effects. Treatment is started at a lower dose, usually 500 mg twice daily, and the dose is then gradually increased to 1 g three times a day. Adverse effects include gastrointestinal distress, liver toxicity and haematological side effects. Sulfasalazine helps in alopecia areata because it inhibits T-cell proliferation, natural killer cell activity and antibody production. It also inhibits the secretion of interleukin (IL)-2, IL-1, TNF-α, IFN-gamma and even IL-667.
A number of clinical studies have documented a positive effect of sulfasalazine in alopecia areata. In one clinical study, 23% of patients showed a good response with satisfactory hair growth after sulfasalazine therapy. Other studies have also shown a beneficial effect of this treatment in resistant cases of alopecia areata66,69.
Azathioprine:
Azathioprine, being an immunosuppressive agent, has also been tried in alopecia areata. The drug is used in many cutaneous disorders owing to its effect on circulating lymphocytes as well as Langerhans cells. In a limited study of 20 patients, hair regrowth was demonstrated in about half of the patients with a dosage regimen of 2g/day70.
Cyclosporine:
Cyclosporine has proven effective in the treatment of alopecia areata because of its immunosuppressive and hypertrichotic properties. However, its side-effect profile and the high rate of recurrence make it a poor choice in alopecia areata, and it should be attempted only in severe forms of the disease not responding to other treatment71.
Methotrexate:
Methotrexate, either alone or in combination with prednisolone, has been used in the treatment of alopecia areata in various studies with variable success rates72.
Oral zinc sulphate
Serum zinc levels have been found to be lower in patients with alopecia areata than in control populations. In a study of 15 patients, hair regrowth was observed in 9 patients (67%) after oral zinc gluconate administration74.
Biological agents:
Tumour necrosis factor inhibitors such as adalimumab, infliximab and etanercept have been tried in alopecia areata, but the results have not been encouraging-76. Clinical trials conducted to date have failed to demonstrate the efficacy of any biological agent in alopecia areata.
Photo- and photochemotherapy
Photochemotherapy:
Several uncontrolled studies of PUVA therapy for the treatment of alopecia areata exist. All types of PUVA (oral PUVA, topical PUVA, local or whole-body UVA irradiation) have been used, with success rates of up to 60-65%57-59. The mechanism of action is thought to be interference with the presentation of follicular antigens to T-lymphocytes through depletion of Langerhans cells. The relapse rate following treatment is high, sometimes demanding repeated treatments for a prolonged period, with implications for carcinogenic risk60. To mitigate the side effects of systemic psoralens, PUVA-turban therapy is used for alopecia areata involving the scalp. In this form of photochemotherapy, a very dilute solution of 8-methoxypsoralen is applied to the scalp using a cotton towel as a turban; the scalp is exposed to UVA after keeping the 'turban' in contact with it for about 20 minutes. The efficacy of this form of PUVA therapy has been reported to be about 70%61.
Phototherapy
Although narrowband UVB is among the most effective treatment options in a number of immune mediated skin diseases, the same efficacy has not been found in alopecia areata. Properly designed randomized trials are needed to elucidate whether NBUVB has any role in the management of alopecia areata62-63.
Excimer laser and excimer light
Excimer laser and excimer light are two more recent additions to the phototherapeutic armamentarium for many skin and hair disorders. While their main use remains psoriasis and vitiligo, their immunomodulatory effect can be exploited in many other skin disorders. Some clinical studies have documented the efficacy of excimer laser and excimer light in alopecia areata64-65. In one such study, 41.5% of patches responded to excimer laser therapy administered over 12 weeks64. Another study on childhood alopecia areata found regrowth in 60% of lesions after a treatment period of 12 weeks. The treatment is well tolerated, with erythema of the skin the only adverse effect reported.
Miscellaneous therapies
Various non-conventional therapeutic agents have been used in alopecia areata with some degree of success. These include fractional Er-Glass laser77, topical azelaic acid78, topical onion juice79, topical 5-fluorouracil ointment80 and photodynamic therapy. The efficacy and safety of these agents need to be confirmed in large-scale, double-blind, placebo-controlled trials before they can be recommended for the treatment of alopecia areata.
Non-pharmacological methods
Cosmetic treatments for patients with alopecia areata include the following:
a) Dermatography: This has been used to camouflage the eyebrows of patients with alopecia areata. In this treatment, tiny dots of pigment are applied to the skin in the region of the eyebrows to mask the underlying alopecia81.
b) Wigs or Hair pieces: These are useful for patients with extensive disease and allow them to carry on their usual social life.
Conclusion:
Alopecia areata is now regarded as an autoimmune disease mediated by cellular immunity, with CD8+ lymphocytes acting on follicular antigens. The pathogenesis of alopecia areata is being unravelled through various animal and human studies.
The localized forms often resolve spontaneously or respond to simple treatments such as topical or intralesional corticosteroids. The severe forms carry a guarded prognosis and are difficult to treat; in these cases the best results are achieved with topical immunotherapy.
Isaac Asimov famously said: ‘The only constant is change.’ (Cited in Hartung, 2004).
So why is it so difficult for most of us to understand, manage, or embrace change?
Coping with change can be challenging for many people; how well an individual embraces and accepts a change depends on the change itself and on what its impact or outcome means to them. Should a person be fearful of change, it is natural that they will attempt to resist it, which in turn can cause high levels of stress and anxiety.
Understanding how we typically react to change also helps us to cope better and manage change. The Kubler-Ross (2009) Model of Change is perhaps one of the best known and most applied models within clinical environments (her original work being around the five stages of grief), and it is now also applied to businesses and organisations when looking at changes in the workplace such as loss or change of job.
The five stages she refers to are:
Denial
Anger
Bargaining
Depression
Acceptance
A common example used to explain this model is to understand how we would typically respond to an unexpected change such as a dead car battery.
The dead car battery
Just imagine it is a cold winter day and you are dashing to get to work already running late…
You jump into the car, place the key in the ignition and turn it on.
Nothing happens, the battery is dead.
Applying the Kubler Ross Model to this situation, this is how a person may typically react:
Denial - This cannot be happening! Try again. And again! Check the other things in the car are working such as the lights and radio. Try again but still nothing.
Anger - Arrrrgh you stupid car!!! I’m sick of this car!! Why is this happening today of all days!! Slamming a hand against the steering wheel.
Bargaining - (realising that it really isn’t going to start and that you're going to be late for work)..., Oh please car, if you will just start one more time I promise I'll buy you a brand new battery and keep you clean and tidy. Please just start this one time.
Depression - Oh no! What am I going to do? I'm going to be late for work. I give up. I don't really care any more. What's the use?
Acceptance - Right I need to do something. It is not going to start. I need to call the breakdown service and ring into work.
The above is a simple example, yet most of us have experienced it, or something similar, quite often. If you apply this to a situation where the stakes are far higher, such as a sudden loss or change of job, bereavement, house or relationship, which may impact upon so many things including stability of finances, family, health and other forms of security, then you may be able to see the harsh effect this could have on an individual during that time.
Often individuals add to their stress by expecting themselves to be able to cope with such events. It is important to understand it is not about strength or weakness but about human nature to react by demonstrating the signs of loss and grief. Organisations, managers and individuals need to be understanding and supportive when situations like this happen.
Another way of understanding and coping with change is to consider what goes on in the mind of the individual at the time of the change and what it ‘means’ to them. Some people see risk and uncertainty as exciting and embrace change (depending on the change), whereas others can be fearful of any change, even those perceived to be minor changes, as for them any change is seen as a risk and takes them out of their comfort zone.
The comfort zone
Your comfort zone is where you are fully able, competent and comfortable. The job that you can do with your eyes shut or routines of life where you know exactly what you are doing. You may feel slightly challenged now and then, but there’s nothing you cannot easily handle.
When invited to step outside their comfort zone – or if they’re pushed outside of it - many people react with resistance. This is because of the human fear of failure which, when you look into it more deeply, comes from a desire to be accepted, liked and even loved. When most people ‘fail’ they feel embarrassed, ashamed, silly or stupid because they feel they can’t or couldn’t do whatever it was they tried.
So it’s understandable if at work, or any area of life where there is change, people react with resistance. Change is the unknown, and if you don’t know whether you can do something – especially if you have a ‘Be Perfect’ driver – you could have fears over whether you can do it, can be a success or even cope. Everyday changes such as new computers or telephone systems, new staff, new jobs, new routines and procedures, new management, merging of departments, sections or whole companies or, on a personal level, exams, weddings, divorce, births, deaths, moving house and so on, are all high on the list of stressors due to change.
How big is your zone?
Are you resistant to change? If you are, you’re causing yourself stress. Imagine what size a child’s comfort zone would be compared to an adult’s. Children do not have inhibitions and fears; it’s only as we grow older that we learn to feel fear, that we learn what embarrassment is and how to feel silly or stupid – that is, we learn to have an ego. This restricts our ability to have the freedom to learn, grow and be open to change, as we are nervous about asking questions for fear of looking silly, or trying new things for fear of failure, and we avoid doing anything that may cause us to feel embarrassed.
By being more fluid and open to change, accepting any fear and dealing with it effectively, you would not only grow your confidence and self-esteem, but you will be free to develop your life with more happiness and less stress.
By looking at change differently (for example, recognising that change can also be a good thing; focusing on the possible positives from a situation rather than being quick to look at the negatives from a point of fear and therefore resistance) stress can be greatly reduced.
Choose to flow with change rather than resist; choose to step out of your comfort zone and grow the size of your comfort zone daily. Aim to have a comfort zone the size of a child’s where nothing can faze or worry you, and you will notice a huge difference to the amount of stress you have in your life.
‘The greatest discovery of my generation is that a human being can change their life by altering their attitude of mind.’ William James (cited in Maxwell, 2007).
Remember – the only failure is not trying again. If we fail at something at least we know what NOT to do next time!
Identifying your zones and being rational
Following are three simple exercises you can complete to help you to gain a rational perspective on understanding how you cope with change and also being solution focused when embracing change.
The zones of change help us to understand the different levels of comfort or 'risk' and where changes may sit in terms of their perceived meanings to the individual.
Zones of change
Exercise 1
Think back to a significant change in your life or work (something from the past).
What were your perceived risks at the time?
…………………………………………………………………………………
…………………………………………………………………………………
What did you lose?
…………………………………………………………………………………
…………………………………………………………………………………
What did you gain?
…………………………………………………………………………………
…………………………………………………………………………………
This exercise demonstrates that our 'perceived risks' at the time of a change are often far different from the reality of how the change occurred. It is also common for an individual to notice that their 'gains' can be larger than their 'losses'. (Time can play a factor in this too: a change can seem a disaster at the time, but over time a person can look back and be glad it happened compared with how their life is now.)
Exercise 2
Think of a change that you are currently undergoing.
What aspects of the change are in your ‘comfort zone’?
…………………………………………………………………………………
…………………………………………………………………………………
What aspects are in your ‘risk zone’?
…………………………………………………………………………………
…………………………………………………………………………………
What aspects are in your ‘high risk zone’?
…………………………………………………………………………………
…………………………………………………………………………………
What do you need to make the ‘high risk’ into ‘risk’ and the ‘risk’ into ‘comfort’?
…………………………………………………………………………………
…………………………………………………………………………………
This exercise is excellent for considering a current change and how it may affect a person.
Actually listing in categories the level of ‘risk,’ or even drawing the zones on a piece of paper and writing in each change in the place on the zone where the person believes it sits, will give a rational perspective.
Once all the ‘risks’ are highlighted then that is the time to minimize ‘risk’ and find solutions for the individual to cope or manage that change. This is good for action planning and allowing a person to take control to embrace a change rather than being reactive once the change has occurred.
Exercise 3
Think of a life or work change which is going to occur in the future.
Blockers
What I’d be sorry to lose.
…………………………………………………………………………………
…………………………………………………………………………………
My fears and concerns.
…………………………………………………………………………………
…………………………………………………………………………………
Drivers
Benefits of the change.
…………………………………………………………………………………
…………………………………………………………………………………
What I’d be glad to leave behind.
…………………………………………………………………………………
…………………………………………………………………………………
Answering these questions assists a person to determine how much resistance they may feel/have towards a change. Listing potential blockers will identify fears and concerns of the change as well as the levels of risk and loss. Listing drivers will encourage the individual to consider the benefits of the change, the gains, and that change can also be a good thing.
Typically, whichever list is the longest or has the most meaning or impact will be the strongest for that person. If this is the blockers, they will resist the change and cause themselves pressure and stress; addressing the zones of change and looking for ways to reduce risk would therefore be a good strategy in action planning to manage the change well. Should the drivers be the strongest, the person is likely to embrace the change more readily, although they may still need to address their thoughts and rationale for any blockers listed.
Change tips:
Embrace change: if you don't accept it, someone will push you into it.
Take every opportunity to grow your comfort zone.
Have the attitude that there is no failure and only learning and development – when we ‘fail’ we know what NOT to do next time.
The worst rarely happens, so why waste energy focusing on it and enforcing irrational fears?
Change CAN be a good thing.
There is always a solution; it may take time for you to see it, but if you look, you will find it.
Case Presentation: Reflex Anoxic Seizures and Anaesthesia
Reflex anoxic seizures ('RAS') may present as potentially life-threatening events, but these are often preventable. They are most common in preschool children (though they can occur at any age) and more so in females. As a cause of seizures they are not rare; one study estimated a frequency of 8 in 1000 preschool children1, but they are often misdiagnosed. The pathophysiology of RAS is vagally mediated: a noxious stimulus causes a supranormal vagal discharge resulting in bradycardia and then asystole2. This results in cerebral hypoperfusion and hypoxia. During this time the patient is often noted to become very pale with dusky lips, initially flaccid and then tonic with rigid extension and clenched jaws. They may then have a generalised convulsion, often with rolling eyes and urinary incontinence. The patient spontaneously recovers (the whole episode lasting around 30 to 60 seconds) and will feel somnolent, often remaining pale for a while.
From this description it can easily be understood how such an event can be misdiagnosed as epilepsy; however, it is not associated with the uncontrolled neuronal discharge of epilepsy, and if monitored by EEG this is absent2. It may also be mistaken for breath-holding attacks (where intra-thoracic pressure restricts cerebral perfusion) or Stokes-Adams attacks (where there is abnormal electrical function of the heart).
The noxious stimuli responsible can be many different things. Ocular pressure2, venepuncture3, anaesthetics4, accidental trauma and fear have all been implicated. If these stimuli cannot be prevented, management is normally just supportive (positioning, protection from trauma, oxygen), allowing the fit to self-resolve. Further management can involve atropine5 (either acutely or preventatively), maintenance anticonvulsants6 (though these often stop the fitting but not the syncope) and even pacemaker insertion7.
The case we encountered was that of a 20 year old female student presenting for planned day-case removal of a molar tooth. She was otherwise fit and well with no other past medical history, taking only the combined oral contraceptive pill. Her history of RAS started at age 1, when she was admitted to hospital following two seizures. The seizures occurred every few months and she was provisionally diagnosed with epilepsy and started on prophylactic treatment. However, as she grew older she was able to describe how the attacks were preceded not by an aura but by an unpleasant stimulus (such as accidental injury). A new diagnosis of RAS was made and the antiepileptics were stopped without the seizures becoming more frequent. As she entered late childhood and adolescence the seizures became less frequent, but (atypically) they did not stop entirely. At preassessment she reported being seizure-free for just over a year and was anxious that the procedure could precipitate another.
After consideration, we decided to proceed with anaesthesia with the following measures. The patient was kept calm by a clear explanation of what to expect before coming to theatre, and was then reassured by an affable theatre team (who had been informed of her condition). Atropine was drawn up and available in case vagal overstimulation occurred, as was suxamethonium in case emergency airway intervention was needed. For cannulation, cold spray was used along with distraction. Induction was with propofol (under full monitoring) and anaesthesia was maintained with sevoflurane/nitrous oxide via LMA. To prevent pain as a potential trigger, fentanyl (at induction) and paracetamol (after induction) were given, and local anaesthetic (lidocaine) was administered before any surgery. Emergence was kept as smooth as possible by removing the LMA prior to any gagging and coughing and manually supporting the airway until she was awake.
With these measures the procedure was uneventful and the patient could be discharged home as planned. We hope this case report will help improve awareness and understanding of RAS, and the steps that can be taken peri-operatively to help ensure safe anaesthesia.
Foreign body ingestion is a common occurrence, especially in children, alcoholics, the mentally handicapped and edentulous people wearing dentures. However, the majority of individuals pass these objects without any complications.1 Most foreign bodies pass readily into the stomach and travel the remainder of the gastrointestinal tract without difficulty; nevertheless, the experience is traumatic for the patient, the parents, and the physician, who must await the removal or the ultimate passage of the foreign body.2 The alimentary canal is remarkably resistant to perforation: 80% of ingested objects pass through the gastrointestinal tract without complications.3 About 20% of ingested foreign bodies fail to pass through the entire gastrointestinal tract.4 Any foreign body that remains in the tract may cause obstruction, perforation or haemorrhage, and fistula formation. Less than 1% result in perforations from the mouth to the anus, and these are mostly caused by sharp objects and erosions.5,18 Of these sharp objects, chicken bones and fish bones account for half of the reported perforations. The most common sites of perforation are the ileo-caecal junction and sigmoid colon.3
Materials and Methods
This study, “Gastrointestinal tract perforations due to foreign bodies: a review of 21 cases over a ten year period”, was carried out in the Department of General Surgery at the Sher-i-Kashmir Institute of Medical Sciences, Srinagar (SKIMS), a tertiary care hospital in North India, from January 2002 to December 2011. A total of 21 consecutive patients who underwent surgery for an ingested foreign body perforation of the GI tract over this ten year period were retrospectively reviewed. A computer database and extensive case-note search was performed for patients' personal data, including age, sex, residence and presenting complaints, with special attention to clinical examination findings. The type and nature of the foreign objects, mode of entry into the gastrointestinal tract, preoperative diagnosis, perforation site, and treatment received were recorded. Complications arising from perforation of the GI tract due to foreign body ingestion, and complications arising from the specific treatment received, were noted. Important findings on laboratory tests, including complete blood count, erythrocyte sedimentation rate (pre-operative, post-operative and at follow-up), blood cultures, serum chemistry, and chest and abdominal X-rays, were recorded. Special efforts were made to identify predisposing factors for ingestion of foreign bodies, including edentulous patients with dentures, psychosis, extremes of age and hurried eating habits. Clinical, laboratory and radiological findings, treatment modalities, operative findings and therapeutic outcomes were summarized. Data were described as means and percentages.
Intravenous antibiotics (ceftriaxone and metronidazole) were given in the emergency room and changed to specific therapy according to culture sensitivity postoperatively.
Results
The average follow-up duration was 13 months (range 7-19 months). There were 14 male (66.7%) and 7 female (33.3%) patients, ranging in age from 7 to 82 years with a median age of 65 years at the time of diagnosis. The most frequently ingested objects were dietary foreign bodies (n = 17). Four patients had ingested other objects: toothpicks (n = 2) and metallic staples (n = 2), as shown in Figure 1. Among the dietary foreign bodies, fish bone was found in 7 (33.3%) and chicken bone in 10 (47.6%), as shown in Figure 2. All the patients described their ingestion as accidental and involuntary. A definitive preoperative history of foreign body ingestion was obtained in 4 (19.0%) patients, and an additional 9 (42.9%) patients admitted ingestion of a foreign body in the postoperative period. Among these 13 patients the average duration between ingestion of the foreign body and presentation was 9.3 days. The remaining 8 (38.1%) patients did not recall any history of foreign body ingestion, dietary or otherwise. In terms of impaction and perforation of the ingested foreign body, the ileum was the commonest site, with 14 (66.7%) patients showing perforation near the distal ileum, followed by the sigmoid colon in 5 (23.8%). Jejunal perforation was seen in 2 (9.5%) patients.
Fig 1: X ray abdomen AP view showing ingested metallic pin
Fig 2: Intra operative picture showing perforation of small gut due to chicken bone
All our patients presented with an acute abdomen and were initially admitted to the emergency department. Since the majority did not give any specific history of foreign body ingestion, they were managed as cases of acute abdomen, with the urgency and level of care varying according to the patient's condition. Eight patients presented with free air in the peritoneum and air under the right hemidiaphragm. The most common preoperative diagnoses were acute abdomen of uncertain origin in 12 (57.1%), acute diverticulitis in 5 (23.8%) and acute appendicitis in 4 (19.0%).
Table 1: Showing demographic profile, site of perforation, etiology, presentation and management.
| S No | Age | Sex | Site | Foreign Body | Presentation & Pre Op Diagnosis | Procedure Performed |
| 1 | 78 | Male | 40 cm from ileo-caecal valve | Fish bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 2 | 65 | Female | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 3 | 80 | Male | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 4 | 43 | Male | Jejunum | Tooth pick | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 5 | 10 | Male | 10 cm from ileo-caecal valve | Metallic staple | Acute abdomen, appendicitis | Removal of foreign body and repair |
| 6 | 72 | Female | Jejunum | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 7 | 65 | Male | 20 cm from ileo-caecal valve | Fish bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 8 | 59 | Male | Sigmoid colon | Chicken bone | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 9 | 65 | Female | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 10 | 49 | Female | 40 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 11 | 7 | Male | Sigmoid colon | Metallic staple | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 12 | 78 | Female | 15 cm from ileo-caecal valve | Fish bone | Acute abdomen, appendicitis | Resection of the perforated distal ileum and ileum stoma |
| 13 | 72 | Male | 15 cm from ileo-caecal valve | Fish bone | Acute abdomen, appendicitis | Resection of the perforated distal ileum and ileum stoma |
| 14 | 56 | Male | 20 cm from ileo-caecal valve | Tooth pick | Acute abdomen, appendicitis | Resection of the perforated distal ileum and ileum stoma |
| 15 | 65 | Male | Sigmoid colon | Fish bone | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 16 | 63 | Male | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 17 | 82 | Female | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 18 | 55 | Female | Sigmoid colon | Fish bone | Haematochezia, acute abdomen, diverticulitis | Removal of foreign body and repair |
| 19 | 56 | Male | 20 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 20 | 69 | Male | Sigmoid colon | Fish bone | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 21 | 71 | Male | 40 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
All the patients underwent an emergency celiotomy, and confirmation of a foreign body induced perforation was possible in all 21 patients. Patients with suspected appendicitis were explored via a classical grid-iron incision and the rest via a midline incision. Varying degrees of abdominal contamination were present in all patients. Of the 21 patients, 11 (52.4%) underwent removal of the foreign body and primary repair of their perforations after minimal debridement. Intestinal resection with stoma formation (resection of the perforated ileum and ileum stoma) was done in 10 (47.6%) of the 21 patients, as shown in Table 1. Take-down of the stoma was done at a later date. Three (14.3%) patients developed a superficial incisional surgical site infection, which responded to local treatment. Two (9.5%) patients died in the postoperative period due to sepsis. One patient (patient no. 3 in Table 1), a diabetic on insulin with chronic obstructive pulmonary disease and hypertension, died on the 3rd postoperative day in the surgical intensive care unit due to severe sepsis. Another patient (patient no. 12 in Table 1), an elderly female with no co-morbid illness, developed severe sepsis due to Pseudomonas aeruginosa and died on the 4th postoperative day; she had been managed at a peripheral primary care centre for the first 3 days for vague abdominal pain with minimal signs. All the other patients had an uneventful recovery and were discharged home between the 6th and 14th postoperative days.
Discussion:
Foreign bodies such as dentures, fish bones, chicken bones, toothpicks and cocktail sticks have been known to cause bowel perforation6. Perforation commonly occurs at points of acute angulation and narrowing.7,8 The risk of perforation is related to the length and sharpness of the object.9 The length of the foreign body is also a risk factor for obstruction, particularly in children under 2 years of age, because they have considerable difficulty in passing objects longer than 5 cm through the duodenal loop into the jejunum. In infants, foreign bodies 2 or 3 cm in length may also become impacted in the duodenum.10 The most common sites of perforation are the ileo-caecal junction and sigmoid colon. Other potential sites are the duodeno-jejunal flexure, appendix, colonic flexure, diverticula and the anal sphincter.3 Colonic diverticulitis or previously unsuspected colon carcinoma have been reported as secondary findings in cases of sigmoid perforation caused by chicken bones.11,12 Even colovesical or colorectal fistulas have been reported as being caused by ingested chicken bones.13,14 In our study the ileum was the most common site, with 14 patients showing perforation near the distal ileum, followed by the sigmoid colon. Jejunal perforation was seen in 2 patients.
The predisposing factors for ingestion and subsequent impaction are dentures causing defective tactile sensation of the palate, sensory defects due to cerebro-vascular accident, previous gastric surgery facilitating the passage of foreign bodies, achlorhydria where the foreign body passes unaltered from the stomach, previous bowel surgery causing stenosis and adhesions and diverticula predisposing to impaction.3 Overeating, rapid eating, or a voracious appetite may be contributing factors for ingesting chicken bones. The mean time from ingestion to perforation is 10.4 days.15 In cases when objects fail to pass the tract in 3 to 4 weeks, reactive fibrinous exudates due to the foreign body may cause adherence to the mucosa, and objects may migrate outside the intestinal lumen to unusual locations such as the hip joint, bladder, liver, and peritoneal cavity.16 The length of time between ingestion and presentation may vary from hours to months and in unusual cases to years, as in the case reported by Yamamoto of an 18 cm chopstick removed from the duodenum of a 71-year-old man, 60 years after ingestion.17 In our study the average duration between ingestion of foreign body and presentation was 9.3 days.
In a proportion of cases, a definitive preoperative history of foreign body ingestion is uncertain.18 Small bowel perforations are rarely diagnosed preoperatively because clinical symptoms are usually non-specific and mimic other surgical conditions, such as appendicitis and caecal diverticulitis.19 In our study the most common preoperative diagnoses were acute abdomen of uncertain origin (n = 12), acute diverticulitis (n = 5) and acute appendicitis (n = 4). Patients with foreign body perforations in the stomach, duodenum and large intestine are significantly more likely to be febrile, with chronic symptoms and a normal total white blood cell count, compared with those with foreign body perforations in the jejunum and ileum.18 Plain radiographs of the neck and chest in both anteroposterior and lateral views are required in all cases of suspected foreign body ingestion and perforation, in addition to abdominal films. CT scans are more informative, especially if radiographs are inconclusive.20 Computerised tomography (CT) scanning and ultrasonography can recognise radiolucent foreign bodies. An ultrasound scan can directly visualise foreign bodies and abscesses due to perforation; the ability to detect a foreign body depends on its constituent materials, dimensions, shape and position.21 Contrast studies with Gastrografin may be required to exclude or locate the site of impaction of the foreign body as well as to determine the level of a perforation. Contrast is important in identifying and locating foreign bodies when intrinsically non-radiopaque materials, such as wooden checkers or fish and chicken bones, are ingested.20 The high performance of computed tomography (CT) or multi-detector-row computed tomography (MDCT) scanning of the abdomen in identifying intestinal perforation caused by foreign bodies has been well described by Coulier et al.22 Although imaging findings can be nonspecific in some cases, the identification of a foreign body with an associated mass or extraluminal collection of gas in patients with clinical signs of peritonitis, mechanical bowel obstruction or pneumoperitoneum strongly suggests the diagnosis.8,20 Finally, endoscopic examination, especially of the upper gastrointestinal tract, can be useful in the diagnosis and management of ingested foreign bodies.
Whenever a diagnosis of peritonitis subsequent to foreign body ingestion is made, an exploratory laparotomy is performed; however, laparoscopically assisted, or completely laparoscopic, approaches have been reported.17,23 Treatment usually involves resection of the bowel, although repair alone has occasionally been described.8 The most common treatment was simple suture of the defect.24 Once a foreign body passes the esophagogastric junction into the stomach, it will usually pass through the pylorus25; however, surgical removal is indicated if the foreign body has sharp points or if it remains in one location for more than 4 to 5 days, especially in the presence of symptoms. In such cases the decision should also be based on the nature of the foreign body, for example whether a corrosive or toxic metal has been ingested.26 Occasionally, objects that reach the colon may be expelled after enema administration. However, stool softeners, cathartics and special diets are of no proven benefit in the management of foreign bodies.7
In traumatic brain injury (TBI), the primary insult to the brain and the secondary insults resulting from systemic complications may produce a multitude of sequelae, ranging from subtle neurological deficits to significant morbidity and mortality. As the brain recovers by repair and adaptation, changes become apparent and may result in physical, cognitive and psychosocial dysfunction. Rehabilitation is usually structured around recovery of physical ability and cognitive and social retraining, with the aim of regaining independence in activities of daily living.
Case Report:
A 76 year old male patient was admitted to an intermediate neurorehabilitation unit following a traumatic brain injury (TBI). He had fallen from a height of 11 feet, resulting in an intracerebral haemorrhage in the left parietal lobe and a left parietotemporal subarachnoid haemorrhage, which was managed conservatively in the neurosurgical unit. He developed recurrent post-traumatic seizures in the form of myoclonic jerks, for which he was started on the antiepileptic drugs (AEDs) sodium valproate, clobazam and levetiracetam. During his stay in the acute neurorehabilitation unit, he was noted to be confused and wandering, with a disrupted sleep-wake cycle. Cognitive assessment showed global impairment across all cognitive domains, suggesting that the cognitive impairment was secondary to TBI, with the chaotic sleeping pattern and fatigue having a significant effect on his cognition. He was transferred to an intermediate neurorehabilitation unit four months post head injury for rehabilitation prior to discharge.
On admission he was confused and disorientated. His neurological examination was normal except for mild expressive dysphasia. On the first night of his stay in the unit, he did not sleep at all and was restless, agitated and aggressive towards the staff. His initial agitation was attributed to the change of surroundings and general disorientation. However, during his first week at the rehabilitation unit it was noted that his sleep-wake cycle was completely disrupted. He would have short, fragmented naps through the day and would regularly become agitated at night, with threatening behaviour towards staff. On admission the Rancho Los Amigos scale† was 4 (confused-agitated) and he needed specialized supervision. Despite environmental modification and optimal pharmacotherapy to improve sleep and decrease agitation, the patient continued to have aggressive outbursts and no identifiable sleep-wake pattern. The nursing staff noted that occasionally, when very agitated, the patient refused his night-time medications, including all AEDs. On such occasions he was reported to have slept better at night and did not have any daytime naps. All blood investigations were within normal limits except for mild hyponatraemia with a normal creatinine clearance, and a CT head showed changes consistent with the previous TBI and no new pathology. A neurology opinion was sought and, with a Naranjo adverse drug reaction probability score†† of 7/10, a decision was taken to slowly wean and stop levetiracetam while continuing all other regular AEDs. The levetiracetam was reduced from the original dose of 750mg twice daily by 500mg every week with the aim of stopping. This resulted in a considerable improvement in the patient's agitation, with a complete halt in the night-time aggressiveness. His sleep-wake cycle normalized and he started sleeping longer at night. His Rancho Los Amigos scale improved from 4 (confused-agitated) to 6 (confused-appropriate). The patient could now participate more with the team of trained therapists in memory and attention exercises, as well as regaining independence in activities of daily living.
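For clarity, and on our reading of the regimen described above (an inference, assuming the 500mg weekly reduction applied to the total daily dose), the weaning schedule works out as:

\[
1500 \;\rightarrow\; 1000 \;\rightarrow\; 500 \;\rightarrow\; 0 \ \text{mg/day},
\]

i.e. roughly three weekly steps from 750mg twice daily to complete withdrawal.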
Discussion:
TBI in the elderly (aged over 64 years) has a worse functional outcome compared with the non-elderly.1 Closed head injury in older adults produces considerable cognitive deficits in the early stages of recovery2, and there have been studies suggesting TBI to be a risk factor for developing Alzheimer's disease.3 Memory deficits, attention problems, loss of executive function and confusion are common after TBI.4 This impaired cognitive function reduces the patient's ability to recognize environmental stimuli, often resulting in agitation and aggression towards perceived threats. TBI by itself may result in a variety of sleep disorders, ranging from hypersomnia, narcolepsy, alteration of the sleep-wake cycle and insomnia to movement disorders.5 Sleep-wake schedule disorders following TBI are relatively rare and may clinically present as insomnia.6 Often these sleep disorders result in additional neurocognitive deficits and functional impairment, which may be attributed to the original brain injury itself and thus be left without specific treatment.
When dealing with a disrupted sleep pattern and agitation in the elderly following TBI, treatable causes such as neurological, infectious and metabolic disorders, and medications, should be ruled out. This is imperative, as they disrupt rehabilitation and the achievement of functional goals. A long duration of agitation post TBI has been associated with a longer duration of rehabilitation stay and persisting limitations in functional independence.7 After ruling out treatable causes, the first focus is environmental management, with provision of a safe, quiet, familiar, structured environment while reducing stimulation and providing emotional support. The next step is the introduction of pharmacotherapy to reduce agitation. Though a variety of pharmacological agents are available, there is no firm evidence of efficacy for any one class, and the choice of drug is often decided by monitoring its effectiveness in practice and watching for side-effects.8 In pharmacotherapy, the general principle is to start low and go slow, while developing clear goals to help decide when to wean and stop medications. Atypical antipsychotics are often used for agitation, while benzodiazepines and non-benzodiazepine hypnotics such as zopiclone are recommended for the treatment of insomnia.9 However, atypical antipsychotics carry an FDA black box warning, being associated with an increased risk of stroke and death among the elderly.
But what does one do when all optimal non-pharmacological and pharmacological measures fail? That brings us back to the drawing board, which in this case led the team to rethink levetiracetam, a newer antiepileptic that has been used as monotherapy for partial seizures and adjunctive therapy for generalized tonic-clonic and myoclonic seizures. Levetiracetam-treated patients have been reported to have psychiatric adverse effects10 including agitation, hostility, anxiety, apathy, emotional lability, depersonalization and depression, with a few case reports of frank psychosis11. While in healthy volunteers levetiracetam is noted to consolidate sleep12, in patients with complex partial seizures it has been noted to cause drowsiness, decreasing daytime motor activity and increasing naps, without any major effects on total sleep time and sleep efficiency at night.13 There has been an isolated report of psychic disturbances following administration of levetiracetam and valproate in a patient with epilepsy, which resolved following withdrawal of the valproate.14 However, in practice levetiracetam is used for recurrent post-TBI seizures as it is a potent AED with a relatively mild adverse effect profile and no clinically significant interactions with commonly prescribed AEDs.15
Any adverse drug reaction (ADR) should be evaluated while keeping the patient's clinical state in mind. This was, indeed, difficult in our case. With a history of TBI and cognitive decline, it became difficult to ascertain whether the neurocognitive issues were purely due to the nature of the TBI or due to an ADR. Assigning causality to a single agent is difficult and fraught with error. Using the Naranjo algorithm16, a score of 7/10 (probable ADR) together with a notable response on withdrawal of the offending drug, as in this case, helps establish possible causality.
This is a rare instance where sleep wake cycle disorder and agitation resolved following withdrawal of Levetiracetam in an elderly patient with TBI. This in turn led to the patient having a stable mood so that therapists could communicate and interact with him in order to improve basic cognitive functions such as attention, memory, thinking and executive control. This case illustrates the constant need to systematically and frequently reassess patients as they recover from TBI.
†Appendix: Rancho Los Amigos Levels of Cognitive Functioning.
Total knee replacement (TKR) is an effective and cost-effective intervention for advanced osteoarthritis (OA). Pain is the main indication for the procedure, and the majority of patients undergoing a TKR gain significant pain relief 1-3.
However, an important minority of those who undergo a TKR have persistent pain in the operated knee 4. Baker showed that 19.8% of patients with data in the National Joint Registry had persistent knee pain, and 18.2% were dissatisfied with the procedure 5. Anderson, in a study of 98 patients, found that 8.1% claimed that the operated knee was worse at follow-up (2-3 years after surgery) than prior to surgery, and 9.2% were dissatisfied. Wylde et al reviewed the available literature in 2009 and found that 10-20% of patients report significant pain in the operated knee, and that the patient-centred outcomes of TKR appear to be considerably worse than those of total hip replacement, whereas implant survivorship figures are fairly similar 7.
There are numerous possible causes of pain after a TKR, including anterior knee pain arising from the patello-femoral joint and extensor apparatus, prosthesis loosening, and infection. Other likely causes include soft-tissue periarticular problems, referred pain, pain sensitisation and neuropathic pain. Because of the risk of infection, and the possible need for further surgery, orthopaedic surgeons are generally keen to investigate these patients thoroughly and exclude surgical causes of the problem. However, there also seems to be a background pain vulnerability in the knee contributing to this high incidence of post-operative pain. The pain itself clearly needs appropriate management, but patients also need surgical evaluation to exclude important reversible causes.9
In spite of this being a sizeable and worrying problem in orthopaedics, very little has been written about the assessment or management of these patients. No protocols or guidelines are available and the costs of management have not been explored.
In this paper we describe the first case series of patients with chronic knee pain after a TKR, and document the investigations and treatment undertaken, and the direct financial costs of their care to the NHS Trust in which they were seen.
The RD&E provides a tertiary arthroplasty referral service for a large area but is a large District General Hospital, and as such the costs and results we report should be representative of most trusts within the UK.
Methods:
A specialist service for revision knee surgery is available at the Royal Devon and Exeter Hospital, resulting in the referral of patients with problems in a knee after a TKR. A registry of such patients has been established at the hospital. The data presented here are based on examination of the records of 41 of these patients: patients with a painful TKR who had been referred to one of the authors by orthopaedic specialists in various institutions, including the resident hospital.
The notes of these patients were analysed to ascertain the number of appointments patients had attended to address the TKR problem, and what investigations and treatments had been undertaken for that problem, both by the originating surgeon and by the revision knee specialist.
In addition, data were obtained from the Trust on the current costs of the clinic appointments, investigations and any treatment or interventions undertaken.
Results:
The 41 patients studied included 27 women and 14 men, with a mean age of 63.9 years (range 49-81) at the time of the initial TKR. In 2009, 536 TKRs were performed in the trust (298 in females and 238 in males), with an average patient age of 70.5 years (range 37-94).
Investigations for abnormal pain after total knee replacement were commenced on average 15 months (range 1-84) after the knee replacement. Appointments and investigations were undertaken over a mean of 20 months from the initial investigation (range 7-45).
Neuropathic pain was diagnosed in 6 patients and instability was identified as a cause in 5 patients. Four patients had aseptic loosening, and no diagnosis was made in 26 patients (63%).
Table 1 shows the average number of appointments attended and investigations undertaken on these 41 patients.
Data on the costs of these appointments, investigations and treatments to the local NHS Trust are presented in Table 2.
Table 1- Number of appointments and investigations per patient with a painful TKR
| | Ave appt/pt | Range |
| Orthopaedic appointment | 4.37 | 2 – 11 |
| Pain team appointment | 2.05 | 0 – 6 |
| Physiotherapy appointment | 3.05 | 0 – 12 |
| Hydrotherapy appointment | 0.8 | 0 – 8 |
| ESR/CRP/WCC/PV | 7.75 | 2 – 38 |
| X-rays | 7.92 | 2 – 35 |
| MRI/CT/Bone scan | 0.41 | 0 – 2 |
| Aspiration/Arthroscopies | 0.51 | 0 – 3 |
Table 2 – Costs of appointments, investigations and treatments per patient
| | Ave cost/pt (£) |
| Orthopaedic appointment | 370 |
| Pain team appointment | 235 |
| Physiotherapy appointment | 45 |
| Hydrotherapy appointment | 68 |
| ESR/CRP/WCC/PV | 21 |
| X-rays | * |
| MRI/CT/Bone scan | 70 |
| Aspiration/Arthroscopies | 1529 |
| Operative Costs | 2624 |
| Drug Costs | 174 |
| Average cost/patient | 5136 |
* X-ray costs were insignificant and not charged to the NHS Trust.
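As a point of clarification (this derivation is our own reading of the tabulated figures, not stated explicitly in the text), the 'Average cost/patient' value in Table 2 appears to be the sum of the individual per-patient category averages:

\[
370 + 235 + 45 + 68 + 21 + 70 + 1529 + 2624 + 174 = 5136 \ (\pounds)
\]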
The outcomes of these 41 patients included medical management alone in 19 (14 of whom reported significant improvement) and further surgical intervention in 22 (14 of whom reported improvement). The calculated direct costs of investigation and management of those treated solely medically (i.e. non-surgically) were £190/patient, while the costs for those treated surgically were £5,051/patient. This is shown in Table 3.
Table 3 – Comparison of operative versus non-operative costs
| | Average surgical intervention cost/pt (£) | Average drug therapy cost/pt (£) | Total cost/pt (£) |
| Operative patients (22) | 4891.09 | 160.22 | 5051.31 |
| Non-operative patients (19) | N/A | 190.63 | 190.63 |
Discussion:
We have analysed the management of a case series of patients with persistent pain in the knee after TKR. The results show that most of the 41 people studied attended numerous appointments with different specialists, and had the same investigations (serology and x-rays) repeated on many occasions over a relatively short period of time (less than 2 years), often before referral to a surgeon with a specific revision knee interest. We have also shown that the investigations and treatment undertaken were costly to the NHS, particularly if specialist imaging investigations (CT or MRI) or further surgical procedures (including aspiration or arthroscopy) were undertaken. The costs to the patients of the numerous appointments and repeated investigations have not been included, but are likely to have been considerable.
The fact that many different appointments were offered and many investigations repeated, along with the wide range of approaches taken with different patients, is indicative of the absence of clear patient pathways or of a co-ordinated clinical service for these patients. Patients were seen by orthopaedic surgeons, pain specialists and physiotherapists, but definitive diagnoses or management plans did not often result from these appointments, and investigations were often repeated unnecessarily. We do not believe that this situation is unique to our area, as there are no clear guidelines or protocols to help us know how best to investigate or manage these patients before referral, and the natural history of the condition is unknown.
The investigations carried out most frequently were serological tests (ESR and CRP) to try to exclude infection, and x-rays to look for prosthesis loosening or other bony problems. Previous work has shown that a single test of ESR greater than 22.5 or CRP greater than 13.5 in this situation has a sensitivity of 0.77 and a specificity of 0.93 for the diagnosis of infection. Repeating these tests offers little help, and if they were positive it would seem more appropriate to proceed to joint aspiration 11, 12-15.
Similarly, there is little point in doing more than one x-ray study a year, as the rate of change in radiographic findings is slow. If a bone problem is suspected, other more sophisticated imaging modalities can be used 16-18.
The cost data obtained from our Trust show that high costs are incurred from new clinic referrals and visits, sophisticated imaging procedures (CT, MRI and bone scans), and surgical procedures – in particular revision surgery. These high costs of investigations would indicate that patients with a painful TKR would be more appropriately investigated and managed by specialist centres with early and meticulous evaluation by surgeons with a special interest in revision knee surgery.
The surgical costs of management of painful TKRs dwarf the amount of money spent on medical (i.e. non-surgical) approaches. This considerable difference suggests that it is of paramount importance to manage the pain early, irrespective of whether surgery is required. Good pain management will allow the surgeon, and particularly the patient, to evaluate the problem in a clearer manner, weighing up the treatment options and making a decision from a more balanced position.
According to data from the National Joint Registry, over 53,000 TKRs were performed in NHS hospitals in England and Wales in 2009 19. Using the estimates of Baker, Wylde and others on the numbers of these patients who are in pain or dissatisfied, we calculate that over 10,000 patients each year, in this country alone, are acquiring the problem of persistent pain in a TKR 7. This represents a huge public health problem, and one that, if our Trust's cost figures are representative, is probably costing the NHS over £10 million per annum. In view of this, we believe that the issue needs urgent attention from the research community and health care providers.
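As a rough check on this estimate (arithmetic derived solely from the figures cited above, rather than from any additional data), applying Baker's reported rate of persistent pain of approximately 20% to the annual volume of procedures gives

\[
53{,}000 \times 0.198 \approx 10{,}500 \ \text{patients per year,}
\]

which, combined with the per-patient investigation and management costs in Table 2, is consistent with an annual cost to the NHS well in excess of £10 million.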
Figure 1 – Algorithm for assessment of a patient with a painful TKR
Our recommendation is that research is undertaken to document the natural history of pain in a TKR knee, differentiate the main causes of this pain, and develop simple algorithms to help clinicians make the correct diagnosis. We suggest a protocol that can be utilised by healthcare professionals to investigate painful TKR’s to allow correct assessment and diagnosis (Figure 1). We believe that health care providers in major orthopaedic centres should set up interdisciplinary clinics in which surgeons, pain specialists and physiotherapists can work together to help investigate and manage these patients.
Sleep is a fundamental part of our lives, and about one-third of our lives is spent sleeping. Sleep deprivation has been linked with such high-profile public disasters as Chernobyl, the Challenger shuttle disaster and the nuclear meltdown at Three Mile Island. According to the US National Highway Traffic Safety Administration, approximately 100,000 motor vehicle accidents are the result of driver drowsiness and fatigue1. There is an association between sleep disorders and anxiety and depression, which may be bidirectional. Patients with insomnia for 2 weeks or longer, without current depression, are at increased risk of developing major depression. Both insomnia and hypersomnia are considered independent predictors of depression and anxiety2.
Key Milestones in the Development of American Sleep Medicine:
The history of the treatment of sleep disorders dates back at least to the use of opium as a hypnotic, reported in ancient Egyptian texts. Sleep medicine, however, did not emerge as a distinct discipline until the 1970s. Drs. Kleitman and Dement were significant early contributors to this field in the United States. In 1957 they first described Non-Rapid Eye Movement (NREM) sleep and Rapid Eye Movement (REM) sleep and proposed the 4 stages of NREM sleep. In 1972 Dr. Dement, a Professor of Psychiatry and Behavioural Sciences at Stanford University School of Medicine, contributed to the establishment of the first sleep disorder centre at Stanford. After Stanford, other centres in New York, Texas, Ohio and Pennsylvania started providing sleep evaluations for which patients stayed in the centre overnight. The Association of Sleep Disorders Centers (ASDC) was established in 1975 and Dr. Dement served as its first president for 12 years. In 1999 the ASDC was renamed the American Academy of Sleep Medicine (AASM). The first textbook of sleep medicine, “Principles and Practice of Sleep Medicine”, was published in the 1980s. The journal SLEEP started in 1978. In 1998 the AASM commissioned the fellowship training committee to develop guidelines for sleep medicine fellowship training. The first two programmes to be granted formal accreditation were Stanford University in California and the Centre for Sleep and Wake at Montefiore Medical Centre, New York. The American Medical Association recognized sleep medicine as a specialty in 1996. In 2004 the Accreditation Council on Graduate Medical Education (ACGME) took over the fellowship accreditation process and approved a one-year training programme 1,3,4,5.
Sleep Medicine training in Europe:
Unlike the United States, the United Kingdom and Europe have no formal sleep medicine training programmes or qualifications. Sleep medicine is restricted to a small group of respiratory physicians with a special interest in the field, and psychiatry trainees are exposed to very little formal teaching in sleep medicine. However, in the last 3 years the neuropsychiatry section of the Royal College of Psychiatrists of the United Kingdom has formed the “sleep working group” under the leadership of Dr. Hugh Selsick. This group is responsible for increasing awareness of sleep medicine among British psychiatrists by emphasizing the importance of sleep medicine in psychiatric practice and encouraging psychiatrists to contribute to the field. The group has developed a competency-based curriculum that incorporates sleep medicine training into the psychiatry curriculum, organizes sleep medicine symposia at the annual conferences of the Royal College, and is developing continuing professional development (CPD) modules for psychiatrists. The British Sleep Society is another forum that brings together physicians from various backgrounds interested in sleep medicine, and the Royal Society of Medicine also has a sleep medicine section which organizes various conferences. There are two week-long courses on sleep medicine, the Edinburgh and Cambridge courses. Recently the University of Glasgow started a Master of Science (MSc) programme in behavioural sleep medicine for healthcare providers working in Scotland, the rest of the United Kingdom and Europe 6, 7, 8, 9. There is a trans-European move to start a formal sleep medicine certification similar to what we have in the United States. The European Sleep Research Society (ESRS), a professional body of sleep scientists in Europe responsible for promoting sleep research and sleep medicine, is starting its “first ESRS certification examination” in sleep medicine; this examination is scheduled to take place on September 4th, 2012 at the 21st Congress of the European Sleep Research Society in Paris. Since there are no formal European training programmes, the examination will be open to those without formal training 10.
Psychiatry and Sleep:
Asking about the patient’s sleep is an integral part of a psychiatric consultation. Almost all the medications that psychiatrists prescribe have an effect on sleep architecture. Some psychiatric medications are used to treat sleep disorders, and others can cause sleep disorders such as Restless Legs Syndrome and periodic limb movement disorder (PLMD). Understanding sleep can help us understand the mechanisms of psychiatric illness. Many psychiatric disorders have comorbid sleep disorders, and several behavioural therapies have been used successfully for the treatment of sleep disorders. There is a bidirectional association between sleep disorders and psychiatric disorders. With the growing population of soldiers returning from Iraq and Afghanistan with post-traumatic stress disorder, sleep problems and depression, there is an increased need for psychiatrists who possess knowledge of both sleep disorders and comorbid psychiatric illness. Psychiatrists have a distinct advantage in dealing with sleep disorders and can bring those skills to sleep medicine.
Are psychiatrists attracted towards sleep medicine? The answer is yes. In recent years we have seen an increased interest among psychiatry trainees in sleep fellowships in the United States. In recognition of the behavioural consequences of sleep problems and the multidisciplinary approach to sleep disorders, fellowship programmes are increasingly taking applicants from various backgrounds and not just pulmonology and neurology. Many psychiatry trainees are choosing a sleep medicine elective earlier in residency. Currently there are more than 710 accredited sleep centres in the United States. Many major university medical centres have a one-year fellowship programme accepting applications from physicians from various backgrounds including Psychiatry, Neurology, Internal Medicine, Pulmonology, Paediatrics, ENT and Anaesthesia 1. There are more than 24 ACGME-approved sleep medicine fellowship programmes in the United States 11. New fellowship programmes are being opened at the University of Kansas Medical Centre and the University of Texas Health Sciences Centre, San Antonio.
Conclusion:
Sleep medicine is a new and exciting field of medicine with the potential to grow in the future. It is a multidisciplinary field. American sleep medicine has evolved greatly over the last 30 years, and there appears to be much to learn from the American model. There is a need for psychiatry training programmes in both the United States and Europe to encourage and prepare their trainees to consider training in sleep medicine. Psychiatry trainees in the United States interested in sleep medicine should speak with their programme directors early in their residency training to register their interest, and residents should also contact their local sleep centre for further advice. Each year the American Academy of Sleep Medicine (AASM) accepts 10 international physicians for its 4-week mini-fellowship programme. Three weeks of the fellowship are spent at an AASM-accredited US sleep centre, with the last week spent at the annual SLEEP conference. A certificate of training is issued at the end of the mini-fellowship 12.
Non-adherence to medication is a significant problem for client groups in psychiatry. Between a third and a half of medicines prescribed for long-term conditions are not used as recommended2, 3. In the case of schizophrenia, studies reveal that almost 76% of sufferers become non-compliant with medication within the first 18 months of treatment 4.
Non-adherence has consequences for both clients and the health care system. If the issues of non-adherence are better identified and actively addressed, there is the potential to improve the mental health of our clients, which would reduce the burden of cost on mental health resources. It is estimated that unused or unwanted medications cost the NHS about £300 million every year. This does not include the indirect costs which result from the increased likelihood of hospitalization and the complications associated with non-adherence5.
The WHO has identified non-adherence as “a worldwide problem of striking magnitude”. This problem is not only linked with psychiatric client groups but is also prevalent in most chronic physical conditions. It has been reported that adherence to medication drops significantly after six months of treatment6.
In broad terms, compliance is defined as the extent to which the patient follows medical advice. Adherence, on the other hand, is defined as the behaviour of clients towards medical advice and their concordance with the treatment plan. Adherence appears to be a more active process in which patients accept and understand the need for their treatment of their own free will and display either a positive or negative attitude towards their medications.7
Unfortunately there is no agreed standard for defining non-adherence. Some trials regard a rate of >50% compliance as adequate adherence, while other researchers believe it should be at least 95%. The Department of Health White Paper (2010) recommends that clinicians have a responsibility to identify such issues and to improve collaborative relationships among multidisciplinary teams in order to deliver a more clinically and cost-effective service8.
Methods:
Sampling:
Our cohort comprised a prospective consecutive sample of 179 patients. The study was conducted in North Essex Partnership NHS Trust, which provides general adult services for a catchment area of approximately 147,000 in the Tendring area. All clients were seen at the outpatient clinic at Clacton & District Hospital. Informed consent was obtained in line with the recommendations of the local clinical governance team. The study was conducted over a two-month period, from October to November 2010. No patient was excluded from the study. The sample consisted of clients aged 16 years and above.
Tools Used:
All clients were asked questions using a standard questionnaire and the MARS (Medication Adherence Rating Scale). The MARS was developed by Thompson et al in 1999 as a quick self-reported measure of adherence, primarily for psychiatric clients. It was derived mainly from the 30-item Drug Attitude Inventory (DAI) and the 4-item Morisky Medication Adherence Questionnaire (MAQ). The validity and reliability of the MARS were established by Thompson et al and subsequently by Fialko et al in a large study in 2008, and have been reported to be adequate9,10.
The patient questionnaire asked clients directly about their current medications and dosage regimens. It also enquired about various factors leading to non-compliance, including whether the medication made them feel suicidal, caused weight gain, made them aggressive, caused sleep disturbance or sexual side effects, the form and size of the tablets, stigma and family pressure, their personal beliefs about medication, and whether they felt they became non-adherent as a direct consequence of the illness itself.
The Medication Adherence Rating Scale focuses both on adherence and on the patient’s attitudes towards medication. It includes questions about how frequently they forget to take their medication and whether they are careless about taking it. It also asks whether they feel better or more unwell when they stop taking their medication. Other aspects include whether they only take medicines when they are sick and whether they believe it is unnatural for their thoughts to be controlled by medication. It also asks about the effects of medication, such as whether they are able to think clearly, feel like a zombie, or feel tired all the time. Finally, it asks whether they believe that remaining compliant with medication will prevent them from getting sick again.
Results:
In total 179 clients were seen in the outpatient clinic during the two-month period. Of these, just over half (54%, n=97) were female and nearly half (46%, n=82) were male. The age of the clients ranged from 18 to 93 years. The mean age of the client group was 55 years, the mode 41 and the median 69.5.
The diagnostic profile was quite varied. With regard to primary diagnosis, the majority (n=144) of service users were given a primary diagnosis using ICD-10 criteria. Mood disorders were the most common primary diagnosis, whereas personality disorder and anxiety were the most common secondary diagnoses. Table 1 shows the number and percentage of service users who presented with the most commonly diagnosed conditions:
Table 1: List of primary and secondary diagnoses

Diagnosis                Primary        Secondary
Mood disorders           72 (50%)       7 (26.92%)
Psychotic illness        25 (17.36%)    1 (3.85%)
Anxiety and PD           13 (9%)        13 (50%)
Dementia                 24 (16.7%)     2 (7.69%)
Neurological disorder    7 (4.86%)      1 (3.85%)
Drugs related illness    2 (1.39%)      2 (7.69%)
Eating disorder          1 (0.69%)      0 (0.0%)
Subjectively, 160 (89%) patients reported that they were compliant with medication, whereas 19 (11%) admitted that they had not been adherent. Of those who said they were non-adherent, 8 were suffering from mood disorders, 2 had schizoaffective disorder, 3 had psychotic illness, 3 had organic brain disorder, 2 had personality disorder, 1 had anxiety and 1 had a neurological illness.
The prescription rate varied between different types of psychotropic medication. Antipsychotics were the most frequently prescribed medication in our cohort. Table 2 shows the data for each category.
Table 2: Number and percentage of each medication category prescribed

Medication category    Number of prescribed medications    % of total prescriptions
Antipsychotics         100                                 44%
Antidepressants        72                                  31%
Mood Stabilisers       21                                  9%
Anxiolytics            21                                  9%
ACH Inhibitors         12                                  5%
Hypnotics              4                                   2%
Less than half (39%, n=69) of service users were on only one type of psychotropic medication, whereas the majority (58%, n=104) were on more than one. A very small number of clients (3%, n=6) were not using any medication at all. When explored further, it was revealed that almost two-thirds of antidepressant prescriptions were for SSRIs (67%, n=55), about one quarter for SNRIs (24%, n=21), a small proportion for NARIs (6%, n=5) and very few for tricyclic antidepressants (3%, n=3). Similarly, among antipsychotic prescriptions, 75% were for atypical and 25% for typical antipsychotics.
Factors leading to non-adherence:
Below is a graphical representation of what clients perceived as the major factors leading to non-adherence to medication. Weight gain, the effect of the illness itself, stigma and personal beliefs appear to be the major factors, as displayed in Chart 1.
Chart 1: Number of responses for each individual factor leading to non-adherence:
Attitude towards Medications:
Service users’ overall attitude towards medication did not appear to be particularly good. They mainly complained of feeling tired and of forgetting to take their medication. Chart 2 below is a graphical representation of the overall attitudes they expressed towards psychotropic medication.
Chart 2: Number of responses for each factor indicating attitude towards medication
As far as the overall MARS score is concerned, the majority of patients (63%, n=110) scored >6 and just over a third (37%, n=63) scored <6. A score of less than 6 is generally considered to indicate a poor level of adherence, which means that more than a third of our client group does not comply with medication.
Discussion:
The aim of our study was to highlight the importance of the factors which often lead to non-adherence to medication and to explore patients’ attitudes towards medication. The results indicate that the problem of non-adherence is wide and deep in our client group, with a significant gap between the subjective and objective rates of adherence. However, we should be mindful that adherence appears to be a continuum rather than a fixed entity: some patients can be more adherent than others yet still have inadequate adherence, hence the concept of partial adherence. It is also evident from the results that patients’ attitudes towards psychotropic medication were not encouragingly positive.
Human beings are born potentially non-compliant. It is our tendency to crave and indulge in things which we know might not be good for our health, such as unhealthy food, alcohol and substance misuse. We comply better with treatments which give an immediate reward, such as pain relief or euphoria from illicit drugs, whereas in the absence of an immediate reward our compliance gradually becomes erratic. Compliance and adherence appear to be learnt phenomena which need to be nurtured throughout life.
Manifestations of non-adherence:
The consequences of non-adherence are mainly manifested through clinical and economic indicators. Clinically, it means an increase in the rate of relapse and re-hospitalisation. According to one study, non-adherent patients have about a 3.7 times higher risk of relapse within 6 months to 2 years compared with patients who are adherent11. In the US it was estimated that at least 23% of admissions to nursing homes were due to non-adherence, representing 380,000 admissions at a cost of $31.3 billion per year12. Similarly, 10% of hospital admissions occurred for the same reason, involving 3.5 million patients and costing the economy $15.2 billion13,14. Figures in the UK are not much different: the cost of prescriptions issued in 2007-08 was estimated at £8.1 billion, of which £4.0 billion worth was not used properly15. Similarly, in terms of hospitalization, about 4% of admissions every year occur because of non-adherence. The total cost of hospitalization in 2007 was estimated at £16.4 billion, and non-adherence was suggested to account for costs in the region of £36-196 million17.
From a clinical perspective, it has been suggested that non-adherence causes about 125,000 deaths every year in the US alone. Meta-analysis has suggested a significant statistical association between non-adherence and depression in certain chronic physical conditions, e.g. diabetes19.
Dimensional Phenomenon?
We need to be aware that adherence is a multidimensional and multifaceted phenomenon and is better understood in dimensional rather than categorical terms. It has been widely accepted that if concordance is the process, then adherence is the ultimate outcome. This was highlighted in the WHO guidelines using the following diagram:
Chart 3: WHO diagram of the five dimensions of adherence:
Therefore any strategy developed to address the issue of non-adherence should consider all five of these dimensions; otherwise it is unlikely to succeed.
Measures to improve Compliance:
All the known clinical and economic indicators suggest that non-adherence needs significant attention and that special measures ought to be taken to avoid complications. Campaigns to improve adherence are already running in other countries, and we need to learn from their experience; one example is the National Medication Adherence Campaign in the US (March 2011). The campaign is a research-based public education effort targeting patients with chronic conditions, their family caregivers, and health care professionals20.
Levine (1998) demonstrated that the following steps may help in increasing adherence:
To appropriately assess the patient’s knowledge and understanding of the disease process and the need for treatment, and to address any dysfunctional beliefs
To link the taking of medication with other daily routines
To use aids to assist medication adherence e.g. MEMS, ePills, calendar or dosette box
To simplify the dosage regimen
To provide a flexible health care team that is willing to offer support
To address current psychosocial and environmental issues which might hinder adherence21.
It is extremely important for clinicians to take time to discuss in detail with their patients all the possible side effects and indications of the prescribed medications. Clinicians may not be able to predict which side effects will occur, but they can certainly educate patients about their psychopathology and the indication and rationale for the medication, and help them realise how important it is to remain adherent. Health education is considered as effective as any sophisticated adherence therapy and should be used routinely22. Clinicians also have a very important role to play in simplifying the dosage regimen and in emphasising to patients that “medications don’t work in patients who don’t take them”23.
Various studies have tried to estimate the efficacy of single-factor and multi-factor approaches to improving adherence 24. Studies have shown proven efficacy for education in self-management25,26, pharmacy management programmes27,28, nursing, pharmacy and other non-medical health professional intervention protocols29,30, counselling31,32, behavioural interventions33,34 and follow-up35,36. However, multi-factor approaches have been found to be more effective than single-factor approaches38. Therefore it has been suggested that we need to address all five dimensions of adherence (Chart 3) with multiple interventions in order to improve adherence in our patients.
One factor of potential concern leading to non-adherence is current overt or covert misuse of alcohol, illicit substances and over-the-counter medications. This issue can understandably lead to partial or complete non-adherence as well as worsening of existing psychiatric conditions, and it therefore needs to be explored further in future research projects.
Fibromyalgia (FM) is a challenging set of chronic, overlapping and debilitating syndromes with widespread pain, abnormal pain processing, sleep disturbance, fatigue and psychological distress.1 The American College of Rheumatology (ACR) 1990 diagnostic guidelines were based primarily on tender point examination findings at 11 of 18 potential tender points;2 however, lack of consistent application of these guidelines in clinical settings led the ACR in 2010 to develop new diagnostic criteria based on a Widespread Pain Index (WPI) and symptom severity (SS) scale with no requirement of a tender point examination. Symptoms must have been present for at least three months with the absence of any other disorder that would otherwise explain the pain and other signs and symptoms.3
Type of pain and other symptoms vary widely in FM, complicating diagnosis and treatment. A cross-sectional survey of 3,035 patients in Germany utilized cluster analysis to evaluate daily records of symptoms noted by patients on handheld computers. Five subgroups were described: four with pain evoked by thermal stimuli, spontaneous burning pain, pressure pain, and pressure pain combined with spontaneous pain; the fifth subgroup had moderate sensory disturbances, but greater sleep disturbances and the highest depression scores.4
Estimates of the prevalence of FM have varied based on case definitions and survey methods. Using 1990 ACR guidelines, it was estimated to affect between 0.1 to 3.3% of populations in western countries and 2.0% in the United States. Greater prevalence occurs among females, with estimates ranging from 1.0 to 4.9%.1, 5 Reasons for the gender difference have not been determined.6-9
Fibromyalgia Risk Factors
Identification of risk factors for FM has been complicated by the array of seemingly unrelated signs and symptoms. The United States Centers for Disease Control (CDC) notes loose association with genetic predisposition,10 bacterial and viral infections, toxins, allergies, autoimmunity, obesity and both physical and emotional trauma.1, 11
Chronic fatigue syndrome and infection
Although chronic fatigue syndrome (CFS) has been defined as a separate syndrome, up to 70% of patients with FM are also diagnosed with CFS and 35-70% of patients with CFS have also been diagnosed with FM.12 Thus studies of patients with CFS may have clinical relevance to FM. Several case-controlled studies have associated CFS, and one study CFS/FM, with chronic bacterial infections due to Chlamydia (Chlamydophila p.), Mycoplasma, Brucella, and Borrelia.12-18 The most prevalent chronic infection found has been that of the various Mycoplasma species.15-23
Mycoplasmas are commonly found in the mucosa of the oral cavity and the intestinal and urogenital tracts, but risk of systemic illness occurs with invasion into the blood vascular system and subsequent colonization of organs and other tissues.15-23 Mycoplasmal infections have been identified in 52-70% of CFS patients compared with 5 to 10% of healthy subjects in North America15-17, 19-22 and Europe (Belgium)23. For example, the odds ratio (OR) of finding Mycoplasma species in CFS was 13.8 (95% CL 5.8-32.9, p<0.001) in North America.17 A review by Endresen12 concluded that mycoplasmal blood infection could be detected in about 50% of patients with CFS and/or FM. A CDC case-control study attempted to replicate these findings based on the hypothesis that intracellular bacteria would leave some evidence of cellular debris in cell-free plasma samples. Results were that the healthy subjects actually had evidence of more bacteria, although the difference was not significant. The authors noted the complexity and limitations of this type of analysis and also postulated that, since the CFS patients were years past the onset of illness, they might have previously cleared the triggering agent.24 However, most studies found Mycoplasma DNA in intracellular but not extracellular compartments in CFS patients, and this could explain the discrepancy.15-23 Other studies have found that 10.8% of CFS patients were positive for Brucella species (OR=8.2, 95% CL 1-66, p<0.01)16 and 8% were positive for Chlamydia pn. (OR=8.6; 95% CL 1-71.1, p<0.01)17.
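To make the reported odds ratios easier to interpret, the calculation can be sketched from the proportions themselves (the figures below are illustrative values chosen from the ranges quoted above, not the exact counts of any single study):

OR = [p₁ / (1 − p₁)] / [p₂ / (1 − p₂)]

Taking, say, p₁ = 0.60 for Mycoplasma-positive CFS patients and p₂ = 0.10 for positive healthy controls:

OR = (0.60 / 0.40) / (0.10 / 0.90) = 1.5 / 0.11 ≈ 13.5

which is of the same order as the OR of 13.8 reported for the North American data.17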
The presence of multiple co-infections may be an especially critical factor associated with either initiation or progression of CFS. Multiple infections have been found in about one-half of Mycoplasma-positive CFS patients (OR = 18.0, 95% CL 8.5-37.9, p< 0.001), compared with single infections in the few control subjects with any evidence of infection.17 A North American study identified chronic infections in 142 of 200 patients (71%) with 22% of all patients having multiple mycoplasmal infections while just 12 of the 100 control subjects (12%) had infections (p<0.01) and none had multiple infections.15 Similarly, a European study reported chronic mycoplasmal infections in 68.6% of CFS and 5.6% of controls. Multiple infections were found in 17.2% of the CFS patients compared with none in the controls (p<0.001).23 Multiple co-infections were also associated with significantly increased severity of symptoms (p<0.01).15, 23
Viral infections associated with CFS have included Epstein Barr virus, human herpes virus-6, cytomegalovirus, enteroviruses and several other viruses.15, 25, 26
Despite indications of single or multiple bacterial and/or viral infections in most patients with CFS, antibiotic or antiviral treatments have yielded inconsistent results.27 Slow-growing intracellular bacteria are relatively insensitive to most antibiotics and have inactive phases during which they would be completely insensitive to any antibiotic.28, 23 Some treatments may actually have resolved the infections, but not the immune pathways that may remain in an activated state capable of producing symptoms.
Fibromyalgia and infection
Bacterial infections associated with FM as a separate syndrome have included small intestinal bacterial overgrowth (SIBO)29, 30 and Helicobacter pylori (HP)31. Utilizing the lactulose hydrogen breath test (LHBT), investigators found SIBO in 100% of 42 patients with FM. They noted that 30-75% of patients with FM have also been found to have irritable bowel syndrome (IBS).29, 30 A confounding factor is that medications prescribed for FM often have gastrointestinal side effects.29 In a Turkish study, HP infection diagnosed by positive immunoglobulin G (IgG) serum antibody was significantly more common in women with FM (44/65 or 67.7%) than in controls (18/41 or 43.9%) (p=0.025)31.
Viral infections associated with FM have included hepatitis C, in which two studies found an association,32-34 and two studies found no association.35, 36 Associations with FM have also been found with hepatitis B, 37 human immunodeficiency virus (HIV)38, 39 and human T cell lymphotropic virus type I (HTLV-1).40
Fibromyalgia and non-infectious associations
Non-infectious triggers associated with FM have included toxins, allergens, and physical or emotional trauma. These triggers may not have been strictly “non-infectious” as allergens and toxins may also be produced by infections, and physical or emotional trauma may lead to the reactivation of previously controlled infections. Respondents to an internet survey of people with FM (n=2,596) also identified triggers as chronic stress (41.9%), emotional trauma (31.3%), acute illness (26.7%) and accidents (motor vehicle 16.1%, non-motor vehicle 17.1%).41 Physical trauma associated with FM has included cervical spine injuries as well as motor vehicle and other accidents.42-44
Fibromyalgia and autoimmunity
Three studies have found thyroid autoantibodies in a greater percentage of subjects with FM than of controls, in spite of normal thyroid hormone levels. One study reported autoantibodies in 41% of FM patients versus 15% of controls.45 The second study reported 16% in FM versus 7.3% in controls, p<0.01.46 The third study reported 34.4% in FM versus 18.8% in controls (p=0.025)47 and OR=3.87, 95% CL 1.54-10.13.48 This could also have been the result of thyroiditis, because infections such as Mycoplasma are often found in thyroiditis patients.15
Autoantibodies to serotonin were identified in 74% of 50 patients with FM compared with 6% of 32 healthy (blood donor) controls. Notably, serotonin levels were normal in 90% of the FM patients indicating serotonin receptor involvement.49
Fibromyalgia and Metabolic Syndrome
Metabolic Syndrome, consisting of abdominal obesity, high triglycerides, high blood pressure, elevated fasting glucose and decreased high-density lipids, was associated with FM in a U.S. study in which cases were 5.6 times as likely to have Metabolic Syndrome as controls (χ²MH = 3.84, p = .047, 95% CL 1.25-24.74).50
Fibromyalgia and emotional trauma
Although emotional trauma has been acknowledged as a contributing factor, most studies of CFS/FM have used recognized tests such as the Beck Depression Inventory, the Beck Anxiety Inventory and the Minnesota Multiphasic Personality Inventory (MMPI) to exclude potential subjects with actual psychiatric illnesses.51, 52
Psychological and physiological subsets of fibromyalgia
A Wisconsin cross sectional survey of 107 women with confirmed diagnoses of FM used validated psychological and physiological measures followed by cluster analysis. Four distinct subsets were identified: (I) history of childhood maltreatment and hypocortisolism with the most pain and disability; (II) “physiological dysregulation” described as “distinctive on nearly every biological index measured” with high levels of pain, fatigue and disability; (III) normal biomarkers with intermediate pain severity and higher global functioning; and (IV) psychological well-being with less disability and pain.53
The “physiological dysregulation” of FM subset II consisted of the highest antinuclear antibody (ANA) titers (t=4.06, p=0.001), highest total cholesterol levels (t=3.96, p<0.001), larger body mass index (BMI) values (t=2.21, p<0.04), lowest Natural Killer (NK) cell numbers (t=3.95, p<0.001), lowest growth hormone (t=3.20, p<0.002), and lowest testosterone levels (t=3.80, p<0.001). Trends were also indicated toward the highest erythrocyte sedimentation rate (ESR) (t=2.02, p=0.056), lowest creatinine clearance (t=1.85, p=0.067) and lowest cortisol (t=2.78, p<0.007).53
Proposed Model of Fibromyalgia
The authors’ proposed model of FM develops a rationale for the “physiological dysregulation” indicated in subset II of the Wisconsin study. In this model, various triggers are followed by prolonged immune activation with subsequent multiple hormonal repression, disrupted collagen physiology and neuropathic pain.
Activation of immune response pathways
Innate immune responses begin with anatomical barriers, such as the epithelium and mucosal layers of the gastrointestinal, urogenital and respiratory tracts, and physiological barriers, such as the low pH of stomach acid and hydrolytic enzymes in bodily secretions.54 Breaching of these barriers activates cell-mediated immunity launched by leucocytes with pattern recognition receptors: neutrophils, macrophages and dendritic cells (DCs).54 Insufficient or damaged anatomical or physiological barriers would necessarily keep this cell-mediated level of innate defense in a constant state of alert and activity.
In contrast to the innate immune response, adaptive immunity has highly specific recognition and response activities resulting in lasting changes produced by leukocytes known as lymphocytes. B lymphocytes (B cells) differentiate into plasma cells that produce antibodies to specific pathogens. T lymphocytes (T cells), the other major cells of adaptive immunity, can be either cytotoxic (Tc) or helper (Th) cells. Tc cells produce progeny that are toxic to non-self peptides, and Th lymphocytes secrete small proteins (cytokines) that mediate signaling between leukocytes and other cell types. All types of lymphocytes retain memory, so that subsequent invasions provoke faster responses and more rapid differentiation into effector cells.54, 55 Some Th cells respond to intracellular pathogens (Th1) and some to extracellular pathogens (Th2). A third type (Th17) appears to respond to certain bacterial and fungal infections and tumor cells, and is also involved in autoimmune diseases.56
In the presence of environmental stressors, cells may release stress proteins to alert the organism to potentially damaging conditions. These proteins can bind to peptides and other proteins to facilitate surveillance of both the intracellular and extracellular protein environment. One form of stress proteins, heat shock proteins (HSP), can mimic the effects of inflammation and can be microbicidal.52, 57
One of the earliest responses to intracellular viral or bacterial infections involves production of three types of interferon (IFN-α, IFN-β and IFN-γ). Any of these can initiate a series of metabolic events in uninfected host cells that produce an antiviral or antibacterial state.58, 59 When IFN-γ targets genes in uninfected cells, the targeted genes become microbicidal by encoding enzymes generating oxygen (O2) and nitric oxide (NO) radicals.58 Activation of O2 or NO radicals triggers another cascade involving IL-6, IL-1β, the cytokine Tumor Necrosis Factor-α (TNF-α) and the transcription factor nuclear factor κB (IKKβ-NF-κB). NF-κB can be activated by a variety of inflammatory stimuli, such as cytokines, growth factors, hormones, oncogenes, viruses and their products, bacteria and fungi and their products, eukaryotic parasites, oxidative and chemical stresses, therapeutic and recreational drugs, additional chemical agents, natural products, and physical and psychological stresses.60 Activation of NF-κB releases its subunits; the p50 subunit has been associated with autoimmunity and the RelA/p65 subunit with transcriptional activity involving cell adhesion molecules, cytokines, hematopoietic growth factors, acute phase proteins, transcription factors and viral genes.61 The authors propose that chronic infection or other stress would be a sustaining trigger of an immune cascade that includes NF-κB and the resultant cell signaling processes that drive many of the symptoms of fibromyalgia.
The cytokine interleukin-6 (IL-6) can either activate or repress NF-κB through a switching mechanism involving IL-1ra and interleukin-1β (IL-1β). IL-6 first activates IL-1β, which then activates TNF-α, leading to the subsequent activation of NF-κB.62, 63 Specifically, the release of the RelA/p65 subunit of activated NF-κB switches on an inhibitory signaling protein gene (Smad 7) that blocks phosphorylation of Transforming Growth Factor Beta (TGF-β), resulting in the repression of multiple genes. Alternatively, IL-6 activates IL-1ra, which allows TGF-β to phosphorylate and induce the expression of the activating signaling protein genes Smad2 and Smad3, resulting in the full expression of multiple genes.61
NF-κB plays a key role in the development and maintenance of intra- (Th1) and inter- (Th2) cellular immunity through the regulation of developing B and T lymphocytes. The p50 dimer of NF-κB has been shown to block B Cell Receptor (BCR) editing in macrophages, resulting in loss of recognition and tolerance of host cells (autoimmunity). T cells that are strongly auto-reactive are normally eliminated in the thymus, but weakly reactive ones are allowed to survive and are subsequently regulated by regulatory T cells and macrophages. Acquired defects in peripheral T-regulatory cells may mean failure to recognize and eliminate weakly reactive ones.54, 64 The IL-17 cytokine associated with autoimmunity can activate NF-κB through a pathway that does not require TNF-α.56 NF-κB activity can also be activated or repressed by the conversion of adenosine triphosphate (ATP) to cyclic adenosine monophosphate (cAMP) in the early phases (3 days) of nerve injury through its main effector enzyme, protein kinase A (PKA).65, 66 PKA decreases during later stages as the enzyme protein kinase C (PKC) increases. PKC then plays important roles in several cell-type-specific signal transduction cascades.67 An isoform of PKC within primary afferent nociceptive nerve fibers signals through IL-1β and prostaglandin E2 (PGE2), as demonstrated in animal studies.68 This process has been called “hyperalgesic priming,” and it has been described as responsible for the switch from acute to long-lasting hypersensitivity to inflammatory cytokines.69
Figure 1 depicts key immune pathways leading to expression or repression of multiple genes proposed to be important in FM and neuropathic pain.
Fibromyalgia and immune - hormonal interactions
Reciprocity exists between the immune system and the hypothalamic-pituitary-adrenal (HPA) axis through its production of glucocorticoid signal transduction cascades.63, 70, 71 Hormones such as cortisol (hydrocortisone), produced by the adrenal cortex, affect the metabolism of glucose, fat and protein.72 The glucocorticoid receptor (GR), a member of the steroid/thyroid/retinoid superfamily of nuclear receptors, is expressed in “virtually all cells”. When the GR in the cytoplasm binds a glucocorticoid, it migrates to the nucleus where it modulates gene transcription, resulting in either expression or repression of TNF-α, IL-1β and the NF-κB p65/RelA subunit. However, the RelA/p65 protein can also repress the glucocorticoid receptor.63, 70, 71, 73
Growth hormone (GH), an activator of NF-κB,74 is usually secreted by the anterior pituitary, but changes found in FM may be hypothalamic in origin. GH is needed for normal childhood growth and adult recovery from physical stresses.75 Although low levels of GH were found in subset II of the Wisconsin study,53 functional deficiency may be expressed as low insulin-like growth factor 1 (IGF-1) combined with elevated GH, suggesting GH resistance.76, 77 Defective GH response to exercise has been associated with increased pain and elevated levels of IL-1β, IL-6, and IL-8.77, 78
The hormones serotonin and norepinephrine modulate the movement of pain signals within the brain. Serotonin has been found to suppress inflammatory cytokine generation by human monocytes through inhibition of the NF-κB cytokine pathway in vitro;79 however, NF-κB promotion of antibodies can repress serotonin.49 Selective serotonin and norepinephrine reuptake inhibitors (SSNRIs), such as duloxetine and milnacipran, are key treatment options for fibromyalgia and have been approved by the U.S. Food and Drug Administration (FDA).80, 81 Although serotonin has been best measured in cerebrospinal fluid (CSF), recently improved methods of collection (used in rats and in 18 women) yielded a high degree of correlation (r=0.97) between CSF and plasma, platelet, and urine measurements.82
NF-κB activation has also been documented to interfere with thyroid hormone action through impairment of Triiodothyronine (T3) gene expression in hepatic cells.83 However, T3 administration has induced oxidative stress and activated NF-κB in rats.84
Metabolic Syndrome, a confounding factor in Fibromyalgia
Leptin and insulin hormones interact to regulate appetite and energy metabolism. Leptin, produced by adipose cells, circulates in the blood eventually crossing the blood-brain barrier to bond with a network of receptors within the hypothalamus. Insulin, produced by beta cells in the pancreas, similarly crosses the blood brain barrier to interact with its own network of hypothalamic receptors. Leptin and its receptors share structural and functional similarities to long-chain helical cytokines, such as IL-6, and it has been suggested that leptin be classified as a cytokine.85-89
Metabolic syndrome can be a confounding factor in FM due to peripheral accumulation of fatty acids, acylglycerols and lipid intermediates in liver, bone, skeletal muscle and endothelial cells. This promotes oxidative endoplasmic reticulum (ER) stress and the activation of inflammatory pathways involving PKC and hypothalamic NF-κB, leading to central insulin and leptin repression.85-87, 89-91 Hyperinsulinemia further stimulates adipose cells to secrete and attract cytokines such as TNF-α and IL-6 that trigger NF-κB in a positive feedback loop, which can be complicated by chronic overnutrition that increases the generation of reactive oxygen intermediates and monocyte chemoattractant protein-1 (MCP-1).87, 89 When exposed to a chronic high-fat diet, hypothalamic NF-κB was activated two-fold in normal mice and six-fold in mice with the obese (OB) gene.89
Fibromyalgia and indicators of immune-hormonal activity
Although most components of either innate or adaptive cell mediated immune responses exist for only fractions of seconds, some of their effects and products can be detected long after in the skin, muscle, blood, saliva or sweat92, 93.
One component, nitric oxide (NO), can suppress bacteria; however, endothelial damage causes dysfunction with impaired release of NO and loss of its protective properties.86 The enzyme transaldolase acts as a counterbalance by limiting NO damage to normal cells. Thus, high levels of transaldolase indicate elevated reactive oxygen species, reactive nitrogen species (ROS/RNS) and cellular stress. The “exclusive and significant over-expression of transaldolase” in the saliva samples of 22 women with FM compared with 26 healthy controls (77.3% sensitivity and 84.6% specificity, p<0.0001; 3 times greater than controls; p=0.02) was “the most relevant observation”; although there was no correlation between transaldolase expression and the severity of FM symptoms.92
High levels of NO have been associated with high levels of insulin, and insulin itself is a vasodilator that, in turn, can stimulate NO production. Beta cells of the pancreas are quite susceptible to ROS/RNS damage.86 When free radical damage of beta cells reaches critical mass, insulin production plummets with an associated decline in NO levels. Thus, patients with FM who have high NO levels would likely be suffering from associated metabolic syndrome, and patients with low NO levels would likely be suffering from Type II diabetes.85, 88
Figure 2 illustrates the relationship of NF-κB to various hormone systems.
Fibromyalgia and immune-hormonal influences on connective tissue
Inflammation of muscles, tendons, and/or fascia is generally followed by proliferative and remodeling phases of healing initiated by fibroblasts, which lay down an extracellular matrix (ECM) composed of collagen and elastin fibers. “Fibroblasts respond to mechanical strain by altering shape and alignment, undergoing hyperplasia and secreting inflammatory cytokines including IL-6.” The extracellular matrix is initially laid down in a disorganized pattern that is subsequently matured and aligned. Chronic and excessive mechanical tension from postural imbalance, hormonal disruption or other factors may interfere with collagen maturation.94 Remodeling of the extracellular matrix and collagen deposition around terminal nerve fibers may be compressive and contribute to neuropathic pain.95
Oxidative stress in muscles accelerates the generation of advanced glucose (glycation) end products (AGEs). AGE-mediated cross-linked proteins have decreased solubility and are highly resistant to proteolytic digestion. Interaction of AGEs with their receptors leads to activation of NF-κB, resulting in an increased expression of cytokines, chemokines, growth factors, and adhesion molecules.96, 97
Two AGE products have been reported at significantly elevated levels in the serum of patients with FM: N-carboxymethyllysine (CML) (2386.56 ± 73.48 pmol/mL; CL 61.36-2611.76 versus controls 2121.97 ± 459.41 pmol/mL; CL 2020.39-2223.560; p<0.05)96 and pentosidine (mean 190 ± 120 SD and median 164 versus controls mean 128 ± 37 SD and median 124; p<0.05)97. Comparison of muscle biopsies showed “clear differences in the intensity and distribution of the immunohistochemical staining”. CML was seen primarily in the interstitial tissue between the muscle fibers, where collagens were localized, and in the endothelium of small vessels of patients. Activated NF-κB was seen in cells of the interstitial tissue, especially around the vessels of patients, but almost no activated NF-κB was seen in the control biopsies. AGE activation of NF-κB has been shown to be significantly more prolonged than the activation of NF-κB by cytokines.96, 97
Fibromyalgia, the nervous system and pain
Sensory transmission in humans occurs through three primary afferent nerve fiber types: heavily myelinated mechanical afferent pathways (A-beta fibers) that transmit non-noxious tactile sensations, small-diameter myelinated fibers (A-delta fibers) that transmit sharp pain, and small-diameter unmyelinated fibers (C fibers) that transmit dull aching pain. The heavily myelinated non-pain Aβ fiber type has been shown to sprout axons that terminate on pain laminae in the posterior horn of the spinal cord, resulting in the conversion of mechanical stimuli to pain. Within the brain, sensitization of the N-methyl-D-aspartate (NMDA) receptors can amplify pain signals between the thalamus and the sensory cortex.67, 98
Chronic damage or excitation of nociceptive afferent fibers from compressive collagen deposition may develop into spontaneous (ectopic) firing, oscillating at frequencies sufficient to initiate cross (ephaptic) excitation of sympathetic and sensory fibers (myelinated A-delta and non-myelinated C fibers) within the dorsal root ganglia (DRG) of the central nervous system.98 Normally, the DRG has little sympathetic innervation, but trauma can trigger sympathetic sprouting that forms basket-like structures within the DRG. Neurotrophins, in particular nerve growth factor (NGF), play an important role in sympathetic fiber sprouting of sensory ganglia in murine models. DRG can be reservoirs for latent viral infections such as Herpes Zoster, HIV and enteroviruses. In addition, Borrelia species have been identified in a non-human primate model of Lyme disease. NGF also facilitates expression of Substance P (SP), a peptide neurotransmitter involved in the induction of the IL-6 - NF-κB pathway 60, 99, 100 and in the transmission of neuropathic pain.101, 102 SP has been found to be elevated in the cerebrospinal fluid of patients with FM in comparison to normal values,103 and control subjects.104
Summary and Conclusions
Chronic unresolved infection, trauma and/or emotional stresses that trigger immune pathways, with subsequent chronic hormonal and nervous system responses, are proposed to perpetuate chronic neuropathic pain. Figure 3 provides a summary model of immune-hormonal contributions to neuropathic pain in fibromyalgia.
The ACR criteria and severity scales have defined fibromyalgia, and the Wisconsin study has identified psychological and physiological subsets; both are critical steps in its characterization. This type of testing could be further strengthened through the use of specific biomarkers. Potential markers of FM status include the RelA/p65 and p50 subunits of NF-κB, which are currently the focus of several clinical trials in other chronic painful conditions. Additional potential markers include IL-6, IL-1β, TNF-α, PKC, transaldolase, CML, pentosidine and NGF. Substance P has previously been identified as a marker of pain, but is problematic as a marker for FM, since it has only been measured in the CSF. The search for markers that are truly specific to FM may continue to be a difficult task due to their overlap with other metabolic conditions, such as CFS, metabolic syndrome, type II diabetes, and IBS. Nonetheless, these markers remain important as they can indicate oxidative stress, cytokine activation, hormonal dysregulation and neuropathic pain. These potential FM markers need to be evaluated in clinical trials where they can be measured over time and correlated with patient symptoms.
Currently, family and general medical practice physicians are uniquely positioned to establish the FM diagnosis, determine subsets of FM patients, investigate potential triggers of chronic immune activation, advise patients, prescribe medications and refer patients to appropriate specialists or pain centers. Establishment of the FM diagnosis requires use of the ACR Widespread Pain Index (WPI) and symptom severity (SS) scale, but no longer requires the tender point examination. 3
Determination of FM subsets can be accomplished using the approach used in the Wisconsin cross sectional survey.53 Investigation of potential triggers of chronic immune activation needs to include sources of underlying infection, unresolved physical or emotional trauma, toxins and food sensitivities. These investigations may be accomplished through careful interviewing and well-designed questionnaires. Advising the patient should acknowledge the reality of their pain and other symptoms and provide rational approaches to resolution of those symptoms. Prescribing of medications needs to be sensitive to current and previous patient experience with medications, in addition to following current guidelines for stabilizing FM symptoms. Referral to appropriate specialists and centers would include those with expertise in physical medicine, psychology and nutrition. Physical medicine can address pain and functional deficits; psychology can address underlying emotional issues and trauma; and nutrition can focus on resolution of chronic inflammation, oxidative stress, and intestinal dysbiosis.
Where do we go from here for additional FM treatment options? Immune modulators have been used successfully in other painful conditions, such as rheumatoid arthritis. Immune modulators acting on the IL-6 - NF-κB cascade have considerable potential for FM, but only after ruling out or successfully treating any underlying infections. Numerous pharmaceutical blockers of NF-κB exist, but most are associated with serious side effects. Natural products may provide additional options, as some are able to mediate pathways leading to NF-κB without the same side effects.105 Medications that elevate individual hormone levels have been included in accepted treatment protocols in the case of serotonin and norepinephrine. However, elevations of other hormones, such as cortisol and thyroid hormones, are under investigation and remain controversial. Elevation of individual hormones may be problematic because of the number of different hormones influenced by the IL-6 - NF-κB pathway.
In rheumatology clinics chronic painful conditions are the norm. Although many pain syndromes are associated with low mood and sometimes clinical depression, the mood disorder often goes unrecognised. Fibromyalgia is one such chronic pain syndrome, 'chronic' being arbitrarily defined as lasting longer than six months. It is a common, poorly understood musculoskeletal disorder which most often affects women, generally between the ages of 25 and 50 years.
In nearly all patients three symptoms predominate, namely neuropathic pain (nerve injury pain), fatigue and non-restorative sleep disturbance. The chronic, diffuse neuropathic pain, described as whole-body pain, is felt particularly in deep tissues such as ligaments, joints, and muscles of the axial skeleton, mainly in the lower cervical and lumbar spine. The pain is often characterised by an exaggerated and prolonged response to a noxious stimulus (hyperalgesia). Patients may be considered to be malingering because there is no obvious explanation for the symptoms. Anxiety, stress and depression caused by fibromyalgia add insult to injury, with personality and cognitive factors coming into play in addition.1 Paraesthesiae (abnormal sensory sensations) or dysaesthesiae (painful sensations) of the extremities may also occur. There is no objective muscular weakness or neurological disorder to account for the symptoms, which adds to the diagnostic dilemma. For example, fibromyalgia affecting the supraspinatus muscle of the shoulder would limit initial abduction of the arm because of pain, not because of any muscle weakness. Cognitive dysfunction, sometimes described as 'fibrofog' or 'conscious confusion', may be a primary symptom of fibromyalgia, reflecting impairments in working memory (a form of short-term memory), episodic memory (memory for events), and semantic memory (memory for words, rules and language).
Nociception refers to the processing of information about harmful stimuli conveyed by neuronal activity up to the point of perception in the dorsal horn of the spinal cord, where primary afferents synapse.2 Evidence is accumulating which shows that atypical sensory processing in the central nervous system (CNS) and dysfunction of skeletal muscle nociception are important in the understanding of fibromyalgia and other chronic pain syndromes.3 The concept of 'central pain sensitization' or 'central sensitivity syndrome' considers fibromyalgia to be a disturbance of nociceptive processing which causes a heightened experience of pain, or pain amplification.4 Because pain signals are subject to variation in amplitude, the modulation of sensory processing may be the key to understanding the pain response not only in fibromyalgia but also in other conditions, such as irritable bowel syndrome. Descending spinal noradrenergic and serotonergic neurons release the neurotransmitters noradrenaline and serotonin, which inhibit nociceptive transmission from primary afferent neurons and dorsal horn neurons. Therefore, when descending inhibition is decreased, irrelevant nociceptive stimuli are more readily felt. Put another way, in patients with chronic pain syndromes descending inhibition may not be functioning adequately to prevent or mask irrelevant pain stimuli. When appropriate medication is used this normal descending inhibition is enhanced and pain is no longer troublesome.
The release of neurotransmitters (ligands) also requires a mechanism that involves voltage-sensitive calcium and sodium channels. Repetitive action potentials cause the calcium channels to open, with the ensuing release of neurotransmitters into the synaptic cleft. The postsynaptic neurons are thus stimulated, leading to molecular and structural changes (sprouting) which cause neuropathic pain. Drugs such as Pregabalin and Gabapentin bind to voltage-sensitive calcium channels and reduce calcium influx, which in turn diminishes pain. The concept of central pain sensitization now incorporates affective spectrum disorders and functional somatic syndromes. It seems that the more painful symptoms one has which are difficult to explain, the more likely the patient is to be suffering from a mood disorder. Dopamine may be involved in the regulation of cognition in the dorsolateral prefrontal cortex and could account for the cognitive deficits.5 Because cingulate and prefrontal cortices are particularly implicated in pain modulation (inhibition and facilitation of pain), structural changes in these systems could contribute to the chronic pain associated with fibromyalgia.6
Many patients with fibromyalgia have an increased sensitivity to sensory stimuli that are not normally or previously painful (allodynia). In other words, minor sensory stimuli that ordinarily would not cause pain in most individuals induce disabling, sometimes severe pain in patients with fibromyalgia.7 A pressure of 4 kg/cm2 (approximately the pressure needed to blanch the skin at the top of one's thumb), which would not be painful in normal individuals, causes patients with fibromyalgia to wince with pain or suddenly withdraw when a tender point is palpated. This indicates that pain occurs at a lower pain threshold in fibromyalgia sufferers when this pressure is applied.
The pain of fibromyalgia may be aggravated by emotional stress, though the latter is difficult to quantify and evaluate. For instance, corticosteroid hormones are released in high amounts after stress, yet fibromyalgia is associated in some patients with a decreased cortisol response to stress. Stress may therefore initiate, inhibit or perpetuate alterations in the corticotrophin-releasing hormone (CRH) neuron, with associated effects on the hypothalamic-pituitary-adrenal (HPA) axis and other neuroendocrine axes.8
There are many other possible explanations for fibromyalgia pain. One of the major neurotransmitters involved in nociception is substance P, found in high concentrations in the spinal cord, limbic system, hypothalamus, and nigrostriatal system. It is involved in the transmission of pain impulses from peripheral afferent receptors to the central nervous system. Nerve growth factor (NGF), a cytokine-like mediator, may indirectly exert its effect by enhancing glutaminergic transmission and could account for sustained central sensitization in fibromyalgia.9,10 Another neuropeptide, calcitonin gene-related peptide, a potent vasodilator present in non-myelinated afferent neurons, may also play a role in pain pathology.5
Levels of the neurotransmitter serotonin have been found to be low in some studies of fibromyalgia patients. Although serum levels of serotonin are lower than in some patients with rheumatoid arthritis and in healthy controls, the variation is too broad, and measurement of serotonin has therefore not proved a useful tool in establishing a diagnosis of fibromyalgia.11
Logically, pharmacologic agents used to treat pain in fibromyalgia would act by either increasing levels of inhibitory neurotransmitters or decreasing levels of excitatory neurotransmitters. In the United States of America (USA), Pregabalin was the first drug to be approved by the Food and Drug Administration (FDA) for the treatment of fibromyalgia and has been shown to improve pain, sleep and quality of life. It is ineffective against depression. The main inhibitory mediator in the brain, gamma-aminobutyric acid (GABA), is formed from glutamate (excitatory) by the enzyme glutamate decarboxylase (GAD). It is particularly plentiful in the nigrostriatal pathways. About 20% of CNS neurons are GABAergic and it serves as a neurotransmitter at some 30% of all CNS synapses.12 Pregabalin increases neuronal GABA levels by producing a dose-dependent increase in glutamate decarboxylase activity. In a meta-analysis of 21 clinical trials estimating treatment differences versus placebo, statistically significant improvement was observed with Duloxetine, Milnacipran 200 mg/day, Pregabalin 300 or 450 mg/day, and Tramadol plus Paracetamol. The meta-analysis also showed a statistically significant increase in the risk of discontinuation because of adverse events with Milnacipran and Pregabalin.13
Antidepressants may improve fibromyalgia symptoms by reducing pain, stabilizing mood and improving sleep, though the effect seems to be modest. If abnormal sleep, and hence subsequent tiredness, precedes the development of fibromyalgia the effect of antidepressants may be primarily associated with improved sleep. However, the efficacy of tricyclic antidepressants is difficult to quantify and their limited superiority over placebo lasts no more than a few months. A meta-analysis of ten randomized double-blinded, placebo-controlled studies revealed only poor to moderate evidence for a beneficial effect at low doses of Amitriptyline (25mg daily) over 6-8 weeks. Even when given in higher doses or prescribed for a longer duration, Amitriptyline did not make a great deal of difference. 14
The efficacy of Selective Serotonin Reuptake Inhibitors (SSRIs) is also inconclusive. More promising results have been demonstrated with Serotonin and Noradrenaline Reuptake Inhibitors (SNRIs) such as Duloxetine. Both serotonin (5-HT) and noradrenaline (NA) exert analgesic effects via descending pain pathways. Pain is a prominent feature of depression and vice versa and the alleviation of one modifies the other. 15, 16 The reduction in pain reduces fatigue and Duloxetine improves mood.
Other drugs used in this condition include Milnacipran and Cyclobenzaprine (a muscle relaxant structurally related to tricyclic antidepressants). Milnacipran and Cyclobenzaprine are not available in the United Kingdom (UK). Tramadol (a serotonin and noradrenaline reuptake inhibitor) is a weak mu-receptor opioid agonist used to control pain but its adverse effects are those of opiates in general, mainly nausea and dependence.
Although other adjunctive non-pharmacological treatments have been advocated, the results are disappointing, and the quality of evidence for non-drug treatments is generally mediocre. Aerobic exercise benefits some patients, especially when combined with biofeedback, patient education and cognitive therapy. A whole gamut of treatments, such as graded exercises, yoga, dietary advice, balneotherapy (heated pool bathing), homeopathy, massage, acupuncture, patient education, group therapy and cognitive behaviour therapy, have been suggested and tried, but few of them have demonstrated clear-cut benefits in randomized controlled trials. Support groups may help some patients.17,18,19
Fibromyalgia is now considered to be, in part, a disorder of central pain processing. Central sensitization manifests as pain hypersensitivity, particularly allodynia, and hyperalgesia. It is believed that central sensitization occurs in part through the action of glutamate on the N-methyl-D-aspartate (NMDA) receptor, resulting in an increase in intracellular calcium and kinase activation, leading to hyperalgesia and allodynia.20
Response to standard analgesics is erratic and more promising results have emerged with drugs such as the SNRIs Duloxetine and Milnacipran, the anticonvulsants Gabapentin and Pregabalin, either used alone or in combination, or with other agents such as Amitriptyline. There is only modest evidence to support SSRIs and Tramadol. Treatment needs to be holistic and multidisciplinary, focussing on both physical pain management and psychological dysfunction. The multidisciplinary approach, though difficult to measure, may help by imparting a sense of empathy and support for patients. Overall, most patients with fibromyalgia continue to have chronic pain and fatigue with symptoms persisting for many years, but it is not necessarily a progressive disorder and some patients may show moderate improvement.
Injuries in children are common.1 In the UK, the incidence of fractures is 20.2 per 1000 children per year, with a peak age of incidence averaging 9.7 years.2 Up to 42 per cent of boys and 27 per cent of girls will sustain at least one fracture during childhood.3
A study conducted in Northern Sweden in the 0-19 year age group showed a rise in injury-related visits to the emergency department over the years, with fractures and dislocations accounting for 21.4 per cent of cases.1 Consequently, this puts pressure on fracture clinics, as new cases take up considerable time in the fracture clinic.
The purpose of this audit was to assess the pattern of new cases referred to the fracture clinic at a large paediatric university teaching hospital.
Materials and Methods
This prospective audit was carried out over a four-week period in May and June of 2010 and it was approved by the institutional clinical audit department. There were a total of 18 working days. A total of 864 patients were seen in the fracture clinic during this period, which included 310 new cases and 554 follow up cases. Data was collected from the fracture clinic patient list for the respective days and the new patient list was extracted from this. Using the picture archiving and communication system (PACS), the radiographs and reports were analysed to collect the data regarding the fracture sustained.
Results
The total number of cases seen during the 4-week period was 864, comprising 310 new cases and 554 follow-up cases. Two hundred and ninety-two of the 310 new cases were analysed, as 18 cases did not have radiographs available.
There were 170 males and 140 females. The average age was 9 years (range 1 month to 16 years).
One hundred and seventy seven (61%) showed fractures. One hundred and one (34%) cases did not have any fractures and 14 (5%) were suspected fractures.
Figure 1 shows the pattern of cases on each working day; days left blank were non-working days or cancelled clinics. The average number of cases seen per day was 48, of which an average of 17.2 were new cases and 30.7 were follow-up cases.
As shown in figure 2, fractures of the distal radius and ulna were the predominant cases (23%) followed by hand fractures (15%). Other fractures included: lower limb excluding foot [23 (8%)], elbow and humerus [14 (5%)], clavicle [11 (4%)], foot [12 (4%)] and others [5 (2%)].
Further analysis of the fractures sustained showed that forearm injuries were the predominant cases and the majority of them were buckle or greenstick fractures. The detailed distribution is shown in figure 3 below.
Figure 1: Daily pattern of cases
Figure 2: Area involved
Figure 3: Pattern of fractures
Discussion
Fracture clinics are a part of any trauma and orthopaedic department. One must consider the benefits of providing such a service, and routine audits are necessary to improve efficiency, accuracy and, above all, patient care.
Although there is evidence that simple fractures, such as buckle fractures of the distal radius, do not need orthopaedic input and can be safely treated in the emergency department with a splint and discharged without follow-up,4 concerns have been raised about the possibility of misdiagnosis and the adequacy of patient information.5
Radiographic interpretation is often done by junior doctors in the emergency department. Guly6 demonstrated that misreading radiographs and missing injuries is a significant problem; a second problem was failure to request a radiograph at all. It has been suggested that better training in interpreting radiographs and rapid reporting by radiologists could address these problems.
Others have adopted local departmental audits and guidelines, which have been shown to reduce this risk.7
Another possibility is rapid review of radiographs by orthopaedic consultants on the same day, as suggested by Beiri et al.8 However, if the hospital covers a large population area, including peripheral walk-in centres, this becomes difficult for accessibility and logistical reasons.
Toeh and colleagues,9 investigating the attitudes of parents towards the paediatric fracture clinic, found that it was predominantly mothers who accompanied their children and that most children had to take time off school to attend the clinic. It was also interesting to note that parents' perception of the severity of injury prompted attendance at follow-up clinics.
In another study, ninety-nine per cent of parents thought attendance at the fracture clinic was important. However, when evaluating the socioeconomic costs, the authors found that each visit led to a loss of 0.25 parental working days, 0.18 daily wages and 0.54 school days.1
A combination of factors may lead to fracture clinic appointments, especially in the paediatric population. Departmental protocols and guidelines may help reduce fracture clinic visits; however, careful consideration must be given when drawing these up to ensure a successful outcome.
Inappropriate referrals consume time and resources, which can delay the service for those genuinely in need of a specialist opinion. In our audit, 34% of the cases seen did not have any fractures and 5% were suspected fractures.
One of the drawbacks of this audit is the lack of case note review for those cases in which fractures were not present. It would have been ideal to investigate the nature of these cases and whether they were treated as soft tissue injuries, seen purely for reassurance, or referred as a safety net.
The following recommendations could be used as possible solutions to decrease inappropriate referrals to fracture clinic.
If the patient is seen in Accident and Emergency (A&E) and the diagnosis is in doubt, there should, where appropriate, be an opportunity for the patient to be seen by, or discussed with, a more senior doctor in A&E.
With regard to peripheral walk-in centres, there should be an opportunity to discuss cases with the on-call orthopaedic team, with integration of PACS so that images are readily available for viewing, and rapid reporting of images should be considered.
The use of specialist physiotherapists for soft tissue injuries in A&E, with follow-up in physiotherapy clinics, has been shown to achieve high patient satisfaction rates and to reduce fracture clinic follow-up. A similar strategy could be considered.11,12
Conclusion
This study has shown that although the majority of patients needed treatment, a significant number (34%) did not have fractures. Considerable time could be saved, especially in a busy fracture clinic, if unnecessary appointments were avoided. It would also benefit patients by sparing them unnecessary visits to the fracture clinic. A repeat study following implementation of the recommendations would reveal any benefit of such a strategy.
There are over 1.6 billion overweight people with a body mass index (BMI) greater than 25 kg/m2. Annually, around 2.8 million deaths are attributed to overweight and obesity worldwide(1). Many overweight individuals underestimate their weight and, despite acknowledging that they are overweight, many are not motivated to lose weight(2). Accurate measurement is important as it identifies patients with diagnoses which subsequently impact on their management. Self-reported weight is often used as a means of surveillance but has been shown to be biased towards under-reporting of body weight and BMI as well as over-reporting of height(3). Several estimation techniques have been devised to quantify anthropometric measurements when actual measurement cannot take place(4),(5),(6); however, these methods are associated with significant errors in hospitalised patients(7). There is no published study that questions the validity of visual estimation of obesity in the daily clinical setting despite its relevance to daily practice. We aim to investigate the accuracy of visual estimation compared with actual clinical measurements in the diagnosis of overweight and obesity.
Methods:
This is a case control study. Patients for this study were attending the endocrinology, cardiology and chest pain out-patient clinics in Cork University Hospital, Cork, Ireland. The questionnaire session was carried out at every endocrinology, cardiology and chest pain clinic for 5 consecutive weeks. A total of 100 patients were recruited, allowing for a 10% margin of error at the 95% confidence level in a sample population of 150,000. Ten doctors of varying grades were chosen randomly to visually score the subjects. Exclusion criteria were pregnancy and being wheelchair bound. Consent was obtained from patients prior to completing the questionnaires. Ethical approval was received from the Clinical Research Ethics Committee of the Cork Teaching Hospitals.
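As a rough check of the stated sample size, the standard formula for estimating a proportion with finite population correction (a textbook calculation, not taken from the original paper, and assuming maximum variability p = 0.5) yields approximately the recruited number:

\[
n_0 = \frac{z^2\,p(1-p)}{e^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.10^2} \approx 96, \qquad
n = \frac{n_0}{1 + \frac{n_0 - 1}{N}} = \frac{96}{1 + \frac{95}{150\,000}} \approx 96,
\]

so roughly 100 patients is consistent with a 10% margin of error at the 95% confidence level.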
In the waiting room, patients were asked to self-report their weight, height and waist circumference to the best of their ability. Demographics and cardiovascular risk factors were obtained from medical charts and are presented in Table 1. The questionnaire included a section specifically testing patients' awareness of abdominal obesity: patients were asked to choose between obesity and abdominal obesity, relying on their own knowledge of markers of cardiovascular risk. Clinical measurements were taken in the nurses' assessment room. Weight was measured using portable SECA scales (Seca 755 Mechanical Column Scale) to the nearest 0.1 kg. All patients were measured on the same weighing scale to minimise instrumental bias. Patients were asked to remove their heavy outer garments and shoes, empty their pockets and stand in the centre of the platform so that weight was distributed evenly on both feet.
Height was measured using a height rule attached to a fixed measuring rod (Seca 220 Telescopic Measuring Rod). Patients were asked to remove their shoes and to stand with their back to the height rule. It was ensured that the back of the head, back, buttocks, calves and heels were touching the wall. Patients were asked to remain upright with their feet together and to look straight ahead, with the top of the external auditory meatus level with the inferior margin of the bony orbit. Height was recorded to the resolution of the height rule (i.e. the nearest millimetre).
Waist circumference was measured using a myotape. Patients were asked to remove their outer garments and stand with their feet close together. The tape was placed horizontally around the body at a level midway between the lower rib margin and the iliac crest. Patients were then asked to breathe normally and the reading was taken at the end of gentle exhalation, which prevents patients from holding their breath. The measuring tape was held firmly, ensuring its horizontal position, but loosely enough to allow placement of one finger between the tape and the subject's body. A single operator, trained to measure waist circumference as per the WHO guidelines, performed all measurements in order to reduce measurement bias(8).
The doctors were asked to visually estimate the patients' weight, height, waist circumference and BMI, and their estimates were recorded on a separate sheet. All doctors were blinded to the actual clinical measurements. The questionnaires were then collected at the end of the clinic and matched to individual patients. Data entry was performed in Microsoft Excel and exported for statistical analysis in SPSS version 16.
Findings
The study enrolled 100 patients. Demographic and cardiovascular risk details are shown in Table 1. Among these, 42 were obese, 35 were overweight and 23 patients had a normal BMI. The sample had a mean BMI of 29.9 kg/m2 (95% CI 28.7-31.1) and a mean waist circumference (WC) of 103.2 cm (95% CI 100.7-107.2). The average male waist circumference was 105.8 cm while the average female waist circumference was 101.6 cm. The mean measured weight was 84.6 kg (95% CI 81.0-88.2) and the mean measured height was 1.68 m (95% CI 1.66-1.70).
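As an illustration of the BMI definition (weight in kilograms divided by the square of height in metres), applying it to the sample means quoted above gives a value close to the reported mean BMI. This back-of-the-envelope check is ours and is only approximate, since the mean of individual BMIs is not exactly the BMI of the means:

\[
\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2} \approx \frac{84.6}{1.68^2} \approx 30.0\ \text{kg/m}^2,
\]

which is close to the measured mean BMI of 29.9 kg/m2.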
Table 1: Cardiovascular risk factors
Risk factor | Male (n=55) | Female (n=45)
Mean age (range) | 53.6 (19-84) | 56.7 (23-84)
Diabetes | 17 | 14
Hypertension | 16 | 20
Hypercholesterolaemia | 24 | 19
Active smoker | 10 | 5
Ex-smoker (>10 years) | 8 | 3
Previous stroke or heart attack | 6 | 6
Previous PCI | 6 | 3
Patients' perception and doctors' estimation of anthropometric measurements were compared with actual measurements and are displayed in Table 2.
Table 2. Deviation from actual measurement values in both groups
Patient's estimation
Measure | Mean estimated | Mean deviation (estimated - actual) | 95% CI of mean deviation
Weight (kg) | 81.16 | -3.71 | -5.10 to -2.32
Height (m) | 1.6782 | 0.0039 | -0.0112 to 0.0033
Waist (cm) | 90.85 | -13.09 | -15.48 to -10.70
BMI (kg/m2) | 28.68 | -1.24 | -1.87 to -0.61
Doctor's visual estimation
Measure | Mean estimated | Mean deviation (estimated - actual) | 95% CI of mean deviation
Weight (kg) | 80.85 | -3.78 | -5.54 to -2.02
Height (m) | 1.6710 | -0.0113 | -0.224 to 0.002
Waist (cm) | 92.10 | -11.84 | -13.87 to -9.81
BMI (kg/m2) | 29.08 | -8.47 | -1.54 to -0.15
In terms of patients' own estimation of height, weight and waist circumference, 49% of patients underestimated their weight by more than 1.5 kg, 35% reported it accurately to within 1.5 kg and 16% over-reported their weight. Sixty-seven per cent of patients estimated their height accurately, 18% under-estimated it and 15% over-estimated it. When asked to estimate their waist circumference, 68% of patients underestimated it by more than 5 cm, 30% overestimated it and 2 patients estimated it accurately to within 5 cm (Figure 1). We found that 70% of patients regarded obesity as the greater threat to health compared with abdominal obesity. There was no difference between patients' self-reported weight and doctors' weight estimation (p = 0.236).
Figure 1. Graphical representation of patients' estimated weight, height and waist circumference
We then analysed the doctors' estimation of height, weight, waist circumference and BMI. For the purpose of interpreting the BMI data, a doctor's estimate was considered accurate when it matched the patient's real BMI category by clinical measurement. Of the patients with a normal BMI, 69.5% were correctly estimated as normal and the rest (30.5%) were estimated as overweight. Of the obese patients, 81% were estimated as obese by the doctors as a group and the rest (19%) were estimated to be overweight. Of the overweight patients, 63% were correctly estimated as overweight by doctors, 9% were estimated as obese and the rest (28%) were mistakenly estimated as having a normal BMI. Accurate BMI estimation by doctors was achieved in 72% of patients (Figure 2).
Figure 2. Doctors' estimation of BMI compared to actual clinical measurement
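The overall 72% figure is consistent with weighting the category-wise accuracies by the number of patients in each BMI category. This consistency check is ours, not part of the original analysis, and assumes the reported percentages apply to the 23 normal-weight, 35 overweight and 42 obese patients:

\[
\frac{0.695 \times 23 + 0.63 \times 35 + 0.81 \times 42}{100} \approx \frac{16.0 + 22.1 + 34.0}{100} \approx 0.72 .
\]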
Doctors underestimated the patients' weight in 53 patients, overestimated it in 26 and were accurate in 21. Estimation of waist circumference to the nearest 5 cm showed marked underestimation in 71% of patients, over-reporting in 3% and accurate estimation in 26%. Most of the underestimation of waist circumference was in the region of 10 to 15 cm. For patients who were obese, doctors were able to estimate waist circumference correctly in 58% of individuals.
Discussion:
This is the first study comparing visual estimation of a cardiovascular risk factor with actual clinical measurements. As obesity and abdominal obesity become increasingly common, our perception of the 'normal' body habitus may be distorted(9).
In larger hospital out-patient departments, physicians and nurses are commonly under pressure from the clinical workload and tend to spend a limited amount of time with patients in order to achieve a quicker turnaround. Cleator et al looked at whether clinically significant obesity is well detected in three different outpatient departments and whether it is managed appropriately once diagnosed(10). In all the outpatient departments, covering the specialties of rheumatology, cardiology and orthopaedics, the actual number of cases of clinical obesity was higher than the number diagnosed, and the management of obesity was heterogeneous and minimal in terms of intervention. With ever-increasing numbers of obese patients attending hospitals, it is understandable that healthcare providers such as physicians, nurses, dieticians and physiotherapists resort to relying on visual estimation.
In terms of patients' own estimation of height, weight and waist circumference, we found that patients were reasonably good at estimating their own height but tended to underestimate their weight. This is probably because these patients had not had a recent measurement of weight, so their weight estimation was based on a historical measurement from months to years earlier, which in the majority of people is less than their current weight. This also explains why their height estimation was more accurate, as adult height does not undergo significant change and is relatively constant.
When attempting to obtain patients' own estimation of waist circumference, we found that most patients were not at all aware of the method used to measure waist circumference. Some patients even mistook waist circumference for their trouser waist size. Among those who were able to give an estimate, a large proportion underestimated.
The majority of patients think that general obesity is more predictive of cardiovascular outcome than abdominal obesity. This lack of awareness reflects on clinicians' efforts in addressing abdominal obesity as an important cardiovascular risk factor during consultations. The lack of a proper awareness campaign by healthcare providers, along with the evolving markers of cardiovascular risk, may further confuse the general public.
Recently, waist circumference and waist-to-hip ratio, along with many serum biomarkers, have been noted to correlate with adverse outcomes in obese individuals, independent of BMI. Waist circumference measurement is a relatively new tool compared with the measurement of BMI, which would explain the discrepancy between doctors' estimation of BMI and of waist circumference. Visual estimation is further compromised because many patients are covered by items of clothing during consultations. To obtain a better estimation of waist circumference, the individual has to be observed from many angles, a task that may be impossible in a busy clinic.
Although BMI is a convenient method of quantifying obesity, recent studies have shown that waist circumference is a stronger predictor of cardiovascular outcomes(11),(12),(13),(14). The importance of waist circumference in predicting health risk is thought to be due to the relationship between waist circumference and intra-abdominal fat(15),(16),(17),(18),(19),(20). We now know that the presence of intra-abdominal visceral fat is associated with a poorer outcome, in that patients are prone to develop the metabolic syndrome and insulin resistance(21). We have yet to devise a more accurate measure of visceral fat and are at present limited to using waist circumference measurements.
Although doctors are generally good at BMI estimation, we found that, when estimating overweight patients' BMI, close to 30% were wrongly estimated as having a normal BMI. Next to the obese, this group of patients is most likely to have metabolic abnormalities and increased cardiovascular risk. If actual measurement of BMI is not routinely done, we may neglect patients who would benefit from intervention. A simple, short counselling session during the outpatient visit, with emphasis on weight loss, the need to increase daily activity levels and the morbidity related to being overweight, may be all that is needed to improve population health in general. Further intervention may include referrals to hospital or community dieticians and prescribed exercise programmes. These intervention tools already exist in the healthcare system and could be accessed readily.
The nature of our study design exposes it to several potential selection and measurement biases. Future studies should include patients of differing ages and socioeconomic backgrounds. Additionally, clinicians of differing grades from various specialties should be included to obtain a more generalisable result. A measure of diagnostic efficacy should also be employed to further assess the value of clinical measurement and therapeutic intervention.
Conclusion:
Visual scoring of markers of obesity by doctors is flawed and reliable only in obese individuals. True anthropometric measurements would avoid misdiagnosing overweight individuals as normal. We can conclude that patients' own estimation of weight is unreliable and that they are unaware of the impact of high abdominal fat deposition on cardiovascular outcome. The latter should be addressed in consultations by both hospital physicians and general practitioners. Further emphasis and education in schools and awareness campaigns should also advocate this emerging cardiovascular risk factor.
Ganglioneuroma is a rare, benign, neuroblastic tumour that originates from neural crest cells. Ganglioneuroma, ganglioneuroblastoma and neuroblastoma are three maturational manifestations of a common neoplasm, in progressive order of loss of differentiation. Ganglioneuromas may be found anywhere along the line of the embryonic neural crest, from the clivus to the sacrum, and are very rare in the pelvis. Fewer than twenty cases have been described in the literature, with various presentations based upon location, including extradural, retroperitoneal, spinal, thoracic and one solely intradural medullary location. Ganglioneuromas may stay asymptomatic for a long period and give rise to no pressure symptoms, because their slow growth allows adaptive changes to accompany the progressive increase in size. Ganglioneuromas demonstrate long-term disease-free survival even with incomplete surgical removal. Here we present the case of a girl aged 11 years with a pelvic ganglioneuroma.
Case Report:
A girl aged eleven years was brought from a remote hilly area in Pakistan by her mother to the city hospital many miles away. The mother had noticed that her daughter's lower abdomen had progressively enlarged over the last few months. The girl's menstrual cycle was normal, so the mother was concerned that, despite not being pregnant, her daughter had a distended abdomen as if she were pregnant. She had a good appetite and unaltered bowel and bladder function. She had no heartburn, regurgitation, nausea, vomiting, haematemesis or melaena. She denied any bleeding per rectum, shortness of breath, cough, loss of consciousness or convulsions. Her past medical history was unremarkable. She had not had any surgery in the past and was not taking any medication. Examination revealed a smooth, large, fixed, hard mass in the right lower abdomen and pelvis. It was palpable in the pelvis on rectal examination, which was otherwise normal. Neither the liver nor the spleen was palpable and she had no ascites. Her chest was clear, heart sounds were normal and there were no neurological abnormalities. Laboratory tests including FBC, LFT, U&E and creatinine were normal. Her MRI scan was not of good quality, owing to the limited resources and technology at the place of her diagnosis, but it showed an 11.4 x 11.8 cm solid, well-defined mass arising from the pelvis and extending up to the umbilicus. The mass showed intermediate to low signal on T1 and hyperintense signal on T2 images (Fig. 1). Midline surgical exploration was undertaken, which showed a large, solid, retroperitoneal mass arising from the sacral nerves within the pelvis. The mass was lying in front of the great vessels, overlapping the confluence of the common iliac vessels. The left ureter was displaced laterally while the right ureter was lying over the mass. The mass was excised completely. The postoperative course was uneventful and the patient was discharged home on the fifth postoperative day.
Figure 1: MRI showing a large soft tissue mass.
Macroscopically, the specimen was a 13 x 13 x 5 cm rounded, well-encapsulated mass (Fig. 2). On sectioning, the mass was seen to be solid, whorled and grey-white. Microscopically, grouped and singly scattered ganglion cells were seen with surrounding neural tissue. There was no evidence of atypia, mitosis or necrosis. The features were suggestive of a ganglioneuroma (Figure 3). The patient was well at two months' follow-up and required no further treatment.
Figure 2: Photograph of the resected specimen shows a well-encapsulated ovoid mass.
Neuroblastoma, ganglioneuroblastoma and ganglioneuroma are tumours of the sympathetic nervous system that arise from neural crest cells.1 These tumours differ only in their progressive degree of cellular and extracellular maturity, ganglioneuroma being the most mature and hence well differentiated, and neuroblastoma the least.2 Ganglioneuromas are rare, benign and slow growing. They may occur spontaneously or as a downgrading of neuroblastoma after therapy with either chemotherapy or radiation.3 The International Neuroblastoma Pathology Classification (INPC) was devised after studying 552 such tumours. Of the 300 tumours with a favourable prognosis, three groups were identified: ganglioneuroma maturing (GN-M), ganglioneuroblastoma intermixed (GNB-I) and ganglioneuroblastoma nodular with favourable subset (GNB-N-FS). These are resectable in 91% of cases in one or more surgical sessions. In contrast, the remaining 252 tumours had an unfavourable prognosis and were called ganglioneuroblastoma nodular unfavourable subset (GNB-N-US). This group was not amenable to surgical resection and usually already had metastases at the time of presentation.4
Ganglioneuromas, although mostly sporadic, may be associated with neurofibromatosis (von Recklinghausen's disease) and multiple endocrine neoplasia type II (MEN).1 Ganglioneuroma usually presents before the second decade and rarely after the sixth.2 The median age at diagnosis has been reported to be approximately 7 years, and there is a slight female preponderance.5 The common locations are the posterior mediastinum and the retroperitoneal space. A retroperitoneal pelvic location is very rare and only a few case histories have been reported.1
Although retroperitoneal ganglioneuromas are usually asymptomatic, some patients may develop compression symptoms, diarrhoea, hypertension, virilisation or myasthenia gravis owing to the release of certain peptides.1 Radiological examination may localise the lesion. MRI may show low intensity on T1-weighted images and heterogeneous hyperintensity on T2-weighted images, with gradually increasing enhancement on dynamic images.6
Surgical excision is sufficient for treatment of ganglioneuromas. Chemotherapy or radiotherapy has no role in the treatment. Even with an incomplete excision, close follow up alone may be adequate. If any progression of the tumour is seen then repeat laparotomy may be indicated.2
Conclusion:
Although pelvic ganglioneuroma is a very rare lesion, it should be considered in the differential diagnosis of any abdomeno-pelvic mass. As it is a slow growing tumour, gross total surgical removal with preservation of organ function is a feasible surgical option.
The widespread use of office software in general practice makes the idea of simple, automatic computerised support an attractive one. Different tools for different diseases have been tested with mixed results, and in 2009 a Cochrane review1 concluded that “Point of care computer reminders generally achieve small to modest improvements in provider behavior. A minority of interventions showed larger effects, but no specific reminder or contextual features were significantly associated with effect magnitude”. One year later another review2 reached a similar conclusion: “Computer reminders produced much smaller improvements than those generally expected from the implementation of computerised order entry and electronic medical record systems”. Despite this, simple, inexpensive, automatic reminders are frequently part of GPs' software, even though their real usefulness is seldom tested in real life.
Repeated hospitalisation for heart failure is an important problem for every national health system; it is estimated that about half of all re-hospitalisations could be avoided3. Adherence to guidelines can reduce the re-hospitalisation rate4, and pharmacotherapy according to treatment guidelines is associated with lower mortality in the community5. In 2004 a software package commonly used in Italian primary care implemented a simple reminder system to help GPs improve the prescription of drugs recommended for heart failure. We evaluated whether this led to a decrease in the re-hospitalisation rate.
METHODS
In 2003, using Millewin®, a software package commonly used by Italian GPs, we showed that appropriate prescribing could be increased using simple pop-up reminders6; a year later, using the Italian general practitioners database 'Health Search - CSD Patient Database (HSD)' (www.healthsearch.it), we observed a lower than expected prevalence of codified diagnoses of heart failure and of prescription of both beta-blockers and ACE-inhibitors/ARBs (data on file). Therefore in 2004 Millewin® embedded a simple reminder system to help heart failure (HF) management. The first reminder aimed to identify patients with HF but without a codified diagnosis: in case of loop diuretic and/or digoxin prescription without a codified HF diagnosis, a pop-up told the GP that the patient could be affected by HF and invited the physician to verify this hypothesis and, if confirmed, to record the diagnosis. The second reminder appeared when a patient with a codified HF diagnosis had no beta-blocker and/or ACE-inhibitor/ARB prescription: a pop-up invited the GP to prescribe the missing drug. This reminder system was already activated in the 2004 release of the software, but required voluntary activation in successive releases. This is a common choice in real life, where clinical choices imposed by the software house are neither welcomed nor accepted by GPs. We had no way of knowing which GPs decided to keep using the reminders.
We examined the 2004-2009 HF hospitalisations in Puglia, a southern Italian region with a population of over 4,000,000 and a high HF hospitalisation rate compared with the Italian mean7. We compared the hospitalisations of patients cared for by GPs who used Millewin® in 2004 with those of patients cared for by GPs who never used Millewin®. Data were provided by the local health authority and were extracted from the administrative database.
RESULTS
We identified 64591 patients (mean age 76 years, SD 12; 49.9% men) with one or more HF hospitalisations; 17810 had ≥ 2 hospitalisations and were analysed for the current study.
Figure 1 - Selection process leading to the identification of the patients with ≥ 2 HF hospitalisations
The selection that led to this group is summarised in figure 1. There was no statistically significant difference in age or gender between patients cared for by GPs using or not using the Millewin® software. The re-hospitalisation rate according to whether patients' GPs used Millewin® is summarised in Table 1.
Table 1: Re-hospitalisation rate of patients cared for by Millewin® users and non-users
Patients with ≥ 2 hospitalisations (N, %)
Time | No MW users | MW users | Total | P
Within 1 year | 11260 (23.1%) | 1136 (22.9%) | 12396 (23.1%) | N.S.
Within 2 years | 13851 (28.4%) | 1410 (28.4%) | 15261 (28.4%) | N.S.
Within 3 years | 15144 (31.0%) | 1543 (31.1%) | 16687 (31.0%) | N.S.
Within 4 years | 15803 (32.4%) | 1612 (32.4%) | 17415 (32.4%) | N.S.
Within 5 years | 16083 (33.0%) | 1643 (33.1%) | 17726 (33.0%) | N.S.
Within 6 years | 16156 (33.1%) | 1654 (33.3%) | 17810 (33.1%) | N.S.
MW = Millewin®, N.S. = not significant
The mean time before the first re-hospitalisation was 108.5 ± 103.3 days for Millewin® non-users and 116.4 ± 107.5 days for users (p < 0.05).
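As a rough consistency check (ours, not reported in the original analysis, and assuming the two groups are approximately those with ≥ 2 hospitalisations within 6 years, i.e. about 16156 non-users and 1654 users), a Welch-type comparison of the two means gives:

\[
SE \approx \sqrt{\frac{103.3^2}{16156} + \frac{107.5^2}{1654}} \approx 2.8\ \text{days}, \qquad
t \approx \frac{116.4 - 108.5}{2.8} \approx 2.9,
\]

which corresponds to p < 0.01 and is consistent with the reported p < 0.05.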
DISCUSSION
Even if reasonable and clinically sound, the availability of computerised reminders aimed at helping GPs identify HF patients and prescribe the recommended drugs did not reduce the re-hospitalisation rate. The first possible explanation for this result is that, after the first year, GPs did not re-activate the reminder system. Unfortunately we could not verify this hypothesis, but it is known that the level of use of such a system may be low in usual care8; furthermore, providers may agree with fewer than half of computer-generated care suggestions derived from evidence-based CHF guidelines, most often because the suggestions are felt to be inapplicable to their patients or unlikely to be tolerated9. Epidemiological studies have shown that heart failure with a normal ejection fraction is now a more common cause of hospital admission than systolic heart failure in many parts of the world10-11. Despite being common, this type of heart failure is often not recognised, and evidence-based treatment, apart from diuretics for symptoms, is lacking12. It is therefore possible that increasing ACE-I/ARB and beta-blocker use in these patients does not influence prognosis or hospitalisation rate. Unfortunately, administrative databases do not allow the type of HF to be distinguished. We must also consider that the use of appropriate drugs after HF hospitalisation may have increased spontaneously in recent years; a survey in Italian primary care showed that 87% of HF patients used inhibitors of the renin-angiotensin system, and 33% beta-blockers13. A further relevant increase in ACE-I/ARBs is therefore unlikely, while an improvement is clearly needed for beta-blockers. Could more complex, information-providing reminders be more useful? This is unlikely, since adding symptom information to computer-generated care suggestions for patients with heart failure did not affect physician treatment decisions or improve patient outcomes14. Furthermore, consultation with a cardiologist before starting beta-blocker treatment is judged mandatory by 57% of Italian GPs13, thus reducing the potential direct effect of reminders on prescription. Finally, we must remember that part of the hospitalisation attributed to HF worsening can be due to non-cardiac disease, such as pneumonia, anaemia, etc; these causes cannot be affected by improved prescription of cardiovascular drugs.
Albeit simple and inexpensive, computerised reminders are not a neutral choice in professional software. Too many pop-ups may be disturbing and may lead to systematic skipping of the reminder text. This can be a problem, since computerised reminders have proved useful for other important primary-care activities, such as preventive interventions15. In our opinion, at the moment, a computerised reminder system should be proposed only as part of a more complex strategy, such as long-term self or group audit and/or a pay-for-performance initiative.
CONCLUSIONS
The availability of computerised automatic reminders aimed at improving the detection of heart failure patients and the prescription of recommended drugs did not decrease repeated hospitalisation; these tools should probably be tested in the context of a more complex strategy, such as a long-term audit.
The prevalence of current alcohol use in India ranges from 7% in the western state of Gujarat (officially under prohibition) to 75% in the north-eastern state of Arunachal Pradesh1. The prevalence of hazardous alcohol use was 14.2% in rural south India2. Thus, alcohol abuse causes major public, family and health-related problems, with impairment of social, legal, interpersonal and occupational functioning in those individuals addicted to alcohol.
A wide variety of biochemical and haematological parameters are affected by regular excessive alcohol consumption. The blood tests traditionally used most commonly as markers of recent drinking are the liver enzymes gamma glutamyltransferase (GGT), aspartate aminotransferase (AST) and alanine aminotransferase (ALT), and the mean volume of the red blood cells (mean corpuscular volume, MCV). However, these are not sensitive or specific enough for use as single tests3.
Elevated Gamma glutamyltransferase levels are an early indicator of liver disease; chronic heavy drinkers, especially those who also take certain other drugs, often have increased GGT levels. However, GGT is not a very sensitive marker, showing up in only 30–50 percent of excessive drinkers in the general population. It is not a specific marker of chronic heavy alcohol use, because other digestive diseases, such as pancreatitis and prostate disease, also can raise GGT levels 4.
AST and ALT are enzymes that help metabolize amino acids, the building blocks of proteins. They are an even less sensitive measure of alcoholism than GGT; indeed, they are more useful as an indication of liver disease than as a direct link to alcohol consumption. Nevertheless, research finds that when otherwise healthy people drink large amounts of alcohol, AST and ALT levels in the blood increase. Of the two enzymes, ALT is the more specific measure of alcohol-induced liver injury because it is found predominantly in the liver, whereas AST is found in several organs, including the liver, heart, muscle, kidney, and brain. Very high levels of these enzymes (e.g., 500 units per liter) may indicate alcoholic liver disease. Clinicians often use a patient's ratio of AST to ALT to confirm an impression of heavy alcohol consumption. However, because these markers are not as accurate in patients who are under age 30 or over age 70, they are less useful than some of the other more comprehensive markers5.
An AST/ALT ratio of more than 1.5 strongly suggests, and a ratio >2.0 is almost indicative of, alcohol-induced damage to the liver6. It has been suggested that an AST/ALT ratio greater than 2 is highly suggestive or indicative of an alcoholic aetiology of liver disease, but extreme elevations of this ratio, with an AST level greater than five times normal, should suggest a non-alcoholic cause of hepatocellular necrosis7.
Sialic acid, a derivative of acetylneuraminic acid attached to the non-reducing residues of the carbohydrate chains of glycoproteins and glycolipids, has been found to be elevated in alcohol abuse8.
In this study we compared the sensitivity, specificity and diagnostic efficiency of serum sialic acid with those of the traditional markers AST (aspartate aminotransferase), ALT (alanine aminotransferase) and GGT (gamma glutamyl transferase) as markers of alcohol abuse.
MATERIALS AND METHODS:
This was a case-control study conducted on 100 male subjects aged 20-60 years: 50 cases and 50 controls. Cases comprised patients diagnosed with alcohol dependence syndrome (ADS) who were admitted to the Psychiatry ADS ward at Mahatma Gandhi Memorial Hospital, Warangal. The study was approved by the institutional ethics committee. The amount, duration and type of alcohol consumed (rum, whisky, brandy, vodka, gin, arrack, etc) were recorded; subjects who had consumed more than half a bottle of these spirits daily (or intermittently, with abstinence of 2-3 days) for more than 5 years were chosen for this study. Alcohol dependence was assessed using the CAGE questionnaire9.
C: felt the need to Cut down drinking; A: Annoyed by criticism of drinking; G: Guilty feelings about drinking; E: Eye-opener (morning drinking).
Those who answered positively to two or more questions were taken as cases10 and their blood samples were collected for the study after informed consent. Controls were selected from healthy subjects who came for a master health check-up at the MGMH health clinic and had no history of alcoholism.
Exclusion criteria:
Patients with history of Diabetes mellitus, Cardiac disease, Viral/Bacterial Hepatitis, Alcoholic hepatitis, tumors, meningitis and history of current use of hepatotoxic and nephrotoxic drugs were excluded from the study.
Four millilitres of blood was collected from each subject from the median cubital vein by venepuncture; serum was separated and the different parameters were analysed. Serum sialic acid was estimated colorimetrically by the modified thiobarbituric acid assay of Warren11 (Lorentz and Krass). Aspartate transaminase,12,13,14 alanine transaminase13,15,16 and gamma glutamyl transferase17,18 were estimated by IFCC-recommended methods on a Dimension clinical chemistry system (auto-analyser).
Statistical analysis: Student's t test (two-tailed, independent) was used to assess the significance of differences in study parameters between controls and cases. Receiver Operating Characteristic (ROC) analysis (SPSS version 17) was used to determine the diagnostic performance of the study parameters.
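For reference, the diagnostic indices reported in Table 2 follow the standard definitions below; the 2x2 notation (TP, FN, TN, FP, i.e. cases and controls falling above or below the chosen cut-off) is introduced here only for illustration and is not part of the original analysis.

\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{Diagnostic efficiency} = \frac{TP + TN}{TP + FN + TN + FP}.
\]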
RESULTS:
All the study parameters were significantly increased (p < 0.001) in subjects with alcohol abuse when compared with controls, as shown in Table 1. The ROC analyses of the different parameters are shown in Figure 1 and Table 2. GGT had the highest diagnostic efficacy as a marker of alcohol abuse, followed by AST and SA.
Figure 1: ROC Curve analysis of different parameters
Table 1: Comparison of study parameters between controls and cases
Parameter | Controls | Cases | P value
AST (U/L) | 24.83 ± 7.57 | 87.9 ± 53.72 | <0.001
ALT (U/L) | 47.63 ± 18.77 | 88.83 ± 46.53 | <0.001
AST/ALT | 0.58 ± 0.23 | 0.982 ± 0.29 | <0.001
GGT (U/L) | 39.36 ± 20.23 | 264.13 ± 298.74 | <0.001
SA (mmol/L) | 1.81 ± 0.42 | 2.92 ± 0.706 | <0.001
Table 2: ROC analysis of different study parameters
Parameter | Best cut-off value | Sensitivity | Specificity | Diagnostic efficacy | AUC
AST (U/L) | 37.50 | 86.66% | 93.33% | 90% | 0.946
ALT (U/L) | 71.00 | 63.33% | 93.33% | 78.33% | 0.811
AST/ALT | 0.732 | 83.33% | 76.66% | 80% | 0.869
GGT (U/L) | 55.50 | 96.66% | 86.66% | 91.66% | 0.929
SA (mmol/L) | 2.3 | 80% | 93.33% | 86.66% | 0.939
DISCUSSION:
Alcoholism is a serious health issue with major socio-economic consequences. Significant morbidity is related to chronic heavy alcohol use, and alcoholics often seek advice only when a complication of drinking sets in. The diagnosis is often based on patients' self-reporting of alcohol consumption, which is unreliable, and a high degree of clinical suspicion is therefore required.
Clinical histories and questionnaires are the commonest initial means of detecting alcohol abuse. They are cheap and easily administered, but subjective. If the history remains uncertain and there is suspicion of alcohol abuse, biological markers provide objectivity, and a combination of markers remains essential in detection. The liver is the prime target organ for alcohol-induced disease, and liver enzymes are important indicators of liver dysfunction and possible markers of alcohol dependence. The commonly used markers are GGT, AST and ALT. Laboratory markers help clinicians to raise the issue of excessive drinking as the possible cause of a health problem; unfortunately, because of the lack of sensitive and specific methods, the detection of problem drinking in clinical settings has remained difficult. Findings of increased serum SA concentrations in alcoholics have therefore raised the possibility of developing new tools for this purpose.
In the present study, increased concentrations of serum sialic acid and of the traditional biochemical markers GGT, AST and ALT were observed in cases compared with controls. Overall, GGT had good sensitivity and specificity, while the other traditional markers of alcohol abuse varied considerably in their specificities and sensitivities. The increase in serum sialic acid concentration in alcohol abusers in our study is in accordance with studies conducted by other investigators8,19,20,21, and the diagnostic accuracy of SA was in accordance with the study by Antilla P et al19. The increases in serum GGT, ALT and AST concentrations in alcohol abusers were in accordance with the studies conducted by other investigators19,22.
CONCLUSION:
In our study, sialic acid proved to be a good test, with a sensitivity of 80%, a specificity of 93.33% and a diagnostic accuracy of 86.66%, showing that SA can be used as a biochemical marker of alcohol abuse where secondary effects of liver disease hamper the use of traditional markers.
The limitations of the study are as follows: it was done in a small group of people only; a larger study including alcohol abusers with and without specific liver disease should be conducted to confirm the role of SA as a new marker of alcohol abuse in situations where the traditional markers are altered by different liver diseases.
The majority of pancreatic tumours are of primary pancreatic origin. Nevertheless, a multitude of extra-pancreatic cancers can metastasize to the pancreas and may present a diagnostic and management dilemma. Our case demonstrates such a problem in a patient with a pancreatic lesion.
Case report
An 82 year old man was referred to our hospital with a computed tomography (CT) scan showing a hypodense lesion in the pancreas. He had undergone an anterior resection 5 years previously for a Dukes' B (pT3N0M0) colon cancer and did not receive any post-operative chemotherapy or radiotherapy. Carcinoembryonic antigen (CEA) levels were normal. He underwent an MRI scan (Figure 1) of his abdomen, which reported a 2.8 cm ring-enhancing lesion in the tail of the pancreas. At endoscopic ultrasound (EUS) a 2 x 2 cm well-circumscribed mass was demonstrated in the tail of the pancreas, close to the splenic artery but not involving the vessel. Fine needle aspiration (FNA) of the lesion demonstrated a poorly differentiated mucin-secreting adenocarcinoma. Immunohistochemical staining was strongly positive for CK 20 but only weakly and focally positive for CK 7 (Figure 2), suggesting metastasis to the pancreas from a colonic primary rather than a primary pancreatic malignancy.
The patient was given the option of undergoing subtotal pancreatectomy or considering palliative chemotherapy. He chose neither and was discharged home with input from the Macmillan team.
Figure 1: MRI after gadolinium showing a ring-enhancing lesion in the tail of pancreas.
Figure 2: (a) Fine needle aspirate on liquid based cytology (x 400) shows irregular distribution of cells with nuclear palisading and pleomorphism. Immunocytochemistry performed on cytology smear shows (b) strong positivity for CK 20 (c) negative for CK7 and (d) focal positivity for CA19.9.
Discussion:
The pancreas is an uncommon site of metastasis from other primary cancers.1 Most of the space-occupying lesions seen in the pancreas on imaging are of primary pancreatic origin.1,2 Adsay et al2 analysed surgical and autopsy databases in 2004 and found that, amongst a total of 4955 adult autopsies and 973 surgical pancreatic specimens, metastatic tumours to the pancreas were present in only 1.6% of all examined autopsy cases and 3.9% of pancreatic resections.
A study from Japan found that the commonest primary malignancies to metastasize to the pancreas were from the stomach, lung and bile duct in that order.3 Other primary tumours that have been reported to metastasize to the pancreas include renal cell carcinoma, lung, breast, small bowel, colon, rectum and melanoma.4, 5 Several mechanisms for development of pancreatic metastases (particularly from colorectal cancer) have been described: transfer via the lymphatic system, metastases from peritoneal carcinomatosis, and/or transfer via the haematogenous system. 6 Direct invasion of the pancreas by the primary tumour was also noted to be a method of spread from bile duct and gastric malignancies.3
CT scanning is often unhelpful in differentiating primary from secondary pancreatic lesions. Pancreatic metastases can present as solid or cystic structures and as hypodense or hyperdense lesions.7,8 A series by Klein et al describing the CT features of pancreatic tumours suggested that multiplicity of tumours and/or hypervascularity were characteristic of secondary pancreatic tumours.9 A recent study has suggested that Positron Emission Tomography (PET) is a more sensitive investigative tool than CT in detecting metastatic colorectal cancer.10 Most patients (as in our unit) usually have EUS-guided FNA or biopsy to arrive at a diagnosis.
The differential diagnosis of primary pancreatic cancer versus metastasis from other carcinomas may be difficult using common histopathological techniques.11 Immunohistochemical staining is often helpful in differentiating primary from secondary pancreatic tumours, and sometimes staining with a combination of different antibodies helps to reach a diagnosis. In a survey of 435 cases, the expression of CK 7 was positive in 92% of pancreatic cancers but in only 5% of colon cancers. On the other hand, CK 20 was positive in 100% of colon cancers and in only 62% of pancreatic cancers.12 Furthermore, CDX2 is frequently expressed in colorectal carcinoma but rarely in pancreatic ductal adenocarcinoma.13
The choice between conservative chemotherapy and resection for solitary pancreatic metastasis from colorectal cancer is still undecided. The natural history of untreated patients with pancreatic metastasis from cancer of the colon or rectum is unknown, and it is thus impossible to compare the survival of resected patients with that of unresected patients treated with chemotherapy.14 Researchers from Johns Hopkins have reported only 4 colonic metastases to the pancreas (0.6%) among 650 pancreatico-duodenectomy procedures performed in their institution from 1990 to 1996.15 Experience from an Italian centre14 showed that metastasis to the pancreas was the indication for surgery in 18 out of 546 pancreatic resections (3.2%) performed over 27 years, and colorectal cancer was the primary tumour in 50% of those cases. The median survival time was 16.5 months (range 8 - 105 months) with no peri-operative mortality reported. In another study, all symptomatic (pain or jaundice) patients experienced complete relief of symptoms after surgery and no one experienced obstructive jaundice or abdominal pain until tumour recurrence.16
Oncologists may argue that chemotherapy can offer the same results as pancreatic resection but with less morbidity. Unfortunately, there is a paucity of data in the medical literature comparing the outcomes of surgical and chemotherapeutic treatment. We agree with Sperti et al14 that resection of pancreatic metastasis from colorectal cancer is a palliative procedure, with long-term survival being an exceptional event.
Conclusion:
Our case demonstrates that the differential diagnosis for pancreatic masses should always include metastasis to the pancreas from other tumours, particularly when there is a history of previous or concurrent non-pancreatic malignancy. When disseminated malignancy is not present, an aggressive surgical approach may offer successful palliation of symptoms and have a role in the multidisciplinary management of metastatic malignancy.
A 67 year old Caucasian male presented to our institution with a one day history of uncontrollable movements. The patient was being evaluated by a psychiatrist, neurologist and a neuro-ophthalmologist for a three month history of severe anxiety, gait instability and palinopsia, respectively. The patient had progressively worsened over the prior two weeks and at the time of presentation reported visual hallucinations with increased confusion. His involuntary movements escalated to the point where it appeared that he was having seizures.
His medical history was significant for gouty arthritis, hypertension, and major depression. His surgical history was notable for an open reduction and internal fixation of his left hip 6 years prior. There was no history of any blood or blood product transfusion. He was an insurance executive and did not have significant occupational exposures. His social and family history was unremarkable. His medications upon arrival included captopril, atenolol, bupropion, lamotrigine, clonazepam, folic acid, and ibuprofen.
On admission he was arousable, well nourished, afebrile and haemodynamically stable but disorientated. His cardiopulmonary, abdominal and integumentary examinations were unremarkable. Neurological examination was significant for bilateral symmetric hyperreflexia, diffuse cogwheel rigidity without a resting or postural tremor, and multi-focal dysrhythmic generalised myoclonus. No neck tenderness or nuchal rigidity was noted. A CT of the head without contrast performed in the casualty department was negative for haemorrhage or any acute intracranial pathology.
His initial assessment showed confusion, hallucinations and myoclonus, suggestive of medication-induced delirium. Lewy body dementia, occipital lobe epilepsy, peduncular hallucinosis and prion disease were all considered in the differential diagnosis. On admission, laboratory data including a CBC, CMP, serum ammonia level, cardiac enzymes, urinalysis, and coagulation profile were unremarkable. A toxicology screen for illicit drugs and heavy metals was negative. The initial cerebrospinal fluid (CSF) analysis was notable only for a mildly elevated protein of 57 mg/dL. Gram stain, India-ink stain, acid-fast bacilli (AFB) smear, bacterial and fungal cultures, as well as PCR for viral nucleic acid (herpes, Varicella-Zoster, Epstein-Barr virus, arboviruses, and cytomegalovirus) were all negative. MRI with contrast was remarkable only for periventricular ischemic changes consistent with small vessel disease. The patient’s bupropion and lamotrigine were discontinued upon admission, and his clonazepam was increased, with resolution of the myoclonus after 24 hours.
The admission EEG showed diffuse slowing with no epileptiform discharges or triphasic waves. Due to progressive neurologic deterioration, he was followed with serial electroencephalograms. On hospital day 5, he became unresponsive and the subsequent EEG revealed non-convulsive status epilepticus (NCSE). Temporary resolution of his seizures was achieved with lorazepam and pentobarbital infusions. After 3 days of almost complete suppression, the pentobarbital was discontinued without NCSE recurrence. On hospital day 15 the EEG again displayed NCSE. A ketamine drip was added to his drug regimen with only brief improvement. Pentobarbital had been restarted and progressively titrated up to the maximal dose without achieving burst suppression. Despite being on the maximal dose of pentobarbital, ketamine, valproic acid, levetiracetam, and topiramate he continued to display NCSE (Figure 1A).
At this point (hospital day 16), therapeutic hypothermia was initiated and continued for 48 hours. The patient’s core body temperature was maintained between 32-33 °C followed by slow rewarming to normothermia over the following 48 hours. Near complete suppression of epileptiform activity was observed on the EEG (Figure 1B). Ketamine and pentobarbital were successfully weaned off during the following days and phenobarbital was introduced without recurrence of NCSE.
Figure 1A. Electroencephalogram from hospital day 15. Refractory non-convulsive status epilepticus while on ketamine, levetiracetam, valproic acid, topiramate and pentobarbital.
Figure 1B. Electroencephalogram from hospital day 16, after the initiation of therapeutic hypothermia. Suppression of epileptiform activity is observed after treatment with therapeutic hypothermia; ECG artifact persisting.
Figure 1C. Electroencephalogram from hospital day 29, 13 days after treatment with therapeutic hypothermia, illustrating generalised periodic sharp wave discharges with lack of background activity. Occasional triphasic waves are noted consistent with Creutzfeldt-Jakob encephalopathy.
Figure 2. Repeat MRI of the brain on hospital day 21 illustrates asymmetric basal ganglia hyperintensities on diffusion weighted sequences, which are often observed in CJD.
Figure 3. H & E stain of the cerebral cortex with low and high magnification (A & B). Coarse and fine vacuolization with spongiosis (arrows) are demonstrated on H & E and silver stain, respectively (C & D).
The patient had a repeat MRI which showed asymmetric basal ganglia hyperintensity on diffusion weighted imaging sequences consistent with CJD (Figure 2)3. The results of CSF analysis for protein 14-3-3, neuron-specific enolase, and tau protein became available on hospital day 22. Despite control of the NCSE, the patient remained unresponsive over the course of the following weeks. The EEG pattern changed to generalised periodic sharp waves, 1-2 per second with occasional triphasic waves, and a lack of background activity (Figure 1C). After fully reviewing the results with the family, an open brain biopsy was performed in an effort to confirm the diagnosis. The biopsy confirmed the diagnosis of spongiform encephalopathy (Figure 3). In light of the findings, withdrawal of care was initiated at the family’s request and the patient expired on hospital day 42. The patient’s estimated symptomatic clinical course was approximately four and one-half months.
DISCUSSION
Creutzfeldt-Jakob disease is the archetype of prion-mediated neurodegenerative disorders. There are 4 types of CJD: sporadic, familial, iatrogenic and variant4. The sporadic type accounts for 85 per cent of all cases of CJD4. The diagnosis of CJD and transmissible spongiform encephalopathy (TSE) can be elusive. The World Health Organisation’s diagnostic criteria for CJD require at least one of the following: (1) neuropathological confirmation, (2) confirmation of protease-resistant prion protein (PrP) via immunohistochemistry or Western blot, or (3) presence of scrapie-associated fibrils4. However, newer and less invasive means of diagnosis have been explored in recent literature. CSF analysis for protein 14-3-3, tau protein, S100B protein and neuron-specific enolase has demonstrated sensitivities of 93 per cent, 89 per cent, 87 per cent and 78 per cent, respectively5. In addition, MRI fluid-attenuated inversion recovery (FLAIR) and diffusion weighted imaging (DWI) techniques have yielded sensitivities of 91-92 per cent and specificities of 94-95 per cent respectively, especially when utilised early in the disease course6. In our case, the initial MRI was unremarkable and only the repeat MRI, performed three weeks after admission, revealed basal ganglia signal intensities consistent with CJD.
One of the most studied and well characterised tools used to support the diagnosis of CJD is the EEG. The typical pattern observed in the early stages of CJD is frontal intermittent rhythmic delta activity (FIRDA). As the disease progresses, characteristic periodic sharp wave complexes (PSWC) can be observed, usually 8 to 12 weeks after the onset of symptoms7. However, the reported sensitivity of EEG is relatively low, ranging from 22 to 73 per cent, and is largely dependent on the subtype of CJD8. In our case, the patient presented with NCSE, which is an uncommon presentation of an uncommon disease. In a retrospective review of 1,384 patients with probable or definite CJD, only 10 patients (0.7 per cent) presented with NCSE2. Our patient did not demonstrate EEG findings consistent with CJD until late in his hospital course. Hence, CJD should be considered in the differential diagnosis of a patient who presents with refractory non-convulsive status epilepticus, without overlooking the more common causes9.
The last important observation is the potential utility of therapeutic hypothermia in patients with refractory NCSE. Therapeutic hypothermia has long been known to suppress epileptiform discharges10,11. However, its safety and efficacy have not been broadly studied in human subjects. Corry and colleagues conducted a small study examining the effects of therapeutic hypothermia on 4 patients with refractory status epilepticus. The results were promising in that therapeutic hypothermia was successful in aborting seizure activity in all 4 patients and effectively suppressed seizure activity in 2 of the 4 patients after re-warming12. We observed a similar result, achieving temporary resolution of NCSE with therapeutic hypothermia in combination with antiepileptic medication in our patient.
Assessment and evaluation are the foundations of learning; the former is concerned with how students perform and the latter, how successful the teaching was in reaching its objectives. Case based discussions (CBDs) are structured, non-judgmental reviews of decision-making and clinical reasoning1. They are mapped directly to the surgical curriculum and “assess what doctors actually do in practice” 1. Patient involvement is thought to enhance the effectiveness of the assessment process, as it incorporates key adult learning principles: it is meaningful, relevant to work, allows active involvement and involves three domains of learning2:
Clinical (knowledge, decisions, skills)
Professionalism (ethics, teamwork)
Communication (with patients, families and staff)
The ability of work based assessments to test performance is not well established. The purpose of this critical review is to assess if CBDs are effective as an assessment tool.
Validity of Assessment
Validity concerns the accuracy of an assessment, what this means in practical terms, and how to avoid drawing unwarranted conclusions or decisions from the results. Validity can be explored in five ways: face, content, concurrent, construct and criterion-related/predictive.
CBDs have high face validity as they focus on the role doctors perform and are, in essence, an evolution of ‘bedside oral examinations’3. The key elements of this assessment are learnt in medical school; thus the purpose of a CBD is easy for both trainees and assessors to validate1. In terms of content validity, CBDs are unique in assessing a student’s decision-making, which is key to how doctors perform in practice. However, as only six CBDs are required a year, they are unlikely to be representative of the whole curriculum. Thus CBDs may have limited content validity overall, especially if students focus on one type of condition for all assessments.
Determining the concurrent validity of CBDs is difficult as they assess the pinnacle of Miller’s triangle – what a trainee ‘does’ in clinical practice (Figure 1)4. CBDs are unique in this aspect, but there may be some overlap with other work based assessments, particularly in task specific skills and knowledge. Simulation may give some concurrent validity to the assessment of judgment. The professional aspect of assessment can be validated by a 360 degree appraisal, as this requests feedback about a doctor’s professionalism from other healthcare professionals1.
Figure 1: Miller’s triangle4
CBDs have high construct validity, as the assessment is consistent with practice and appropriate for the working environment. The clinical skills being assessed will improve with expertise and thus there should be ‘expert-novice’ differences on marking3. However the standard of assessment (i.e. the ‘pass mark’) increases with expertise – as students are always being assessed against a mark of competency for their level. A novice can therefore score the same ‘mark’ as an expert despite a difference in ability.
In terms of predictive validity, performance-based assessments are simulations and examinees do not behave in the same way as they would in real life3. Thus, CBDs are an assessment of competence (‘shows how’) rather than of true clinical performance, and one could perhaps deduce that they do not assess the attitude of the trainee, which, along with knowledge and skills, completes the cycle (‘does’)4. CBDs permit inferences to be drawn concerning the skills of examinees that extend beyond the particular cases included in the assessment3. The quality of performance in one assessment can be a poor predictor of performance in another context. Both the limited number and the lack of generalizability of these assessments have a negative influence on predictive validity3.
Reliability of Assessment
Reliability can be defined as “the degree to which test scores are free from errors of measurement”. Feldt and Brennan describe the ‘essence’ of reliability as the “quantification of the consistency and inconsistency in examinee performance”5. Moss states that less standardized forms of assessment, such as CBDs, present serious problems for reliability6. These types of assessment permit both students and assessors substantial latitude in interpreting and responding to situations, and are heavily reliant on the assessor’s ability. The reliability of CBDs is influenced by the quality of the rater’s training, the uniformity of assessment, and the degree of standardization of examinee performance.
Rating scales are also known to hugely affect reliability – understanding of how to use these scales must be achieved by all trainee assessors in order to achieve marking consistency. In CBD assessments, trainees should be rated against the level expected at completion of their current stage of training (i.e. core or higher training)1. While accurate ratings are critical to the success of any WBA, there may be latitude in the interpretation of these rating scales between different assessors. Assessors who have not received formal WBA training tend to score trainees more generously than trained assessors7-8. Improved assessor training in the use of CBDs and spreading assessments throughout the student’s placement (i.e. a CBD every two months) may improve the reliability and effectiveness of the tool1.
Practicality of Assessment
CBDs are a one-to-one assessment and are not efficient; they are labour intensive and only cover a limited amount of the curriculum per assessment. The time taken to complete CBDs has been thought to negatively impact on training opportunities7. Formalized assessment time could relieve the pressure of arranging ad hoc assessments and may improve the negative perceptions of students regarding CBDs.
The practical advantages of CBDs are that they allow assessments to occur within the workplace and that they assess both judgment and professionalism – two subjects on the curriculum which are otherwise difficult to assess1. CBDs can be very successful in promoting autonomy and self-directed learning, which improves the efficiency of this teaching method9. Moreover, CBDs can be immensely successful in improving the abilities of trainees and can change clinical practice – a feature that is not replicated by other forms of assessment8.
One method for ensuring the equality of assessments across all trainees is by providing clear information about what CBDs are, the format they take and the relevance they have to the curriculum. The information and guidance provided for the assessment should be clear, accurate and accessible to all trainees, assessors, and external assessors. This minimizes the potential for inconsistency of marking practice and perceived lack of fairness7-10. However, the lack of standardization of this assessment mechanism combined with the variation in training and interpretation of the rating scales between assessors may result in inequality.
Formative Assessment
Formative assessments modify and enhance both learning and understanding by the provision of feedback11. The primary function of the rating scale of a CBD is to inform the trainee and trainer about what needs to be learnt1. Marks per se provide no learning improvement; students gain the most learning value from assessment that is provided without marks or grades12. CBDs have feedback built into the process, so it can be given immediately and orally. Verbal feedback has a significantly greater effect on future performance than grades or marks, as the assessor can check comprehension and encourage the student to act upon the advice given1,11-12. It should be specific and related to need; detailed feedback should only occur to help the student work through misconceptions or other weaknesses in performance12. Veloski et al suggest that systematic feedback delivered from a credible source can change clinical performance8.
For trainees to be able to improve, they must have the capacity to monitor the quality of their own work during their learning by undertaking self-assessment12. Moreover, trainees must accept that their work can be improved and identify the important aspects of their work that they wish to improve. Trainees’ learning can be improved by providing high quality feedback, and three main elements are crucial to this process12:
Helping students recognise their desired goal
Providing students with evidence about how well their work matches that goal
Explaining how to close the gap between current performance and desired goal
The challenge for an effective CBD is to have an open relationship between student and assessor in which the trainee is able to give an honest account of their abilities and identify any areas of weakness. This relationship currently does not exist in most CBDs: studies by Veloski et al8 and Norcini and Burch9 revealed that only limited numbers of trainees anticipated changing their practice in response to feedback data. An unwillingness of surgical trainees to engage in formal self-reflection and a reluctance to voice any weaknesses may impair their ability to develop and lead to resistance to the assessment process. Improved training of assessors and removing the scoring of the CBD form may allow more accurate and honest feedback to be given to improve the student’s future performance. An alternative method to improve performance is to ‘feed forward’ (as opposed to feedback), focusing on what students should concentrate on in future tasks10.
Summative Assessment
Summative assessments are intended to identify how much the student has learnt. CBDs have a strong summative feel: a minimum number of assessments are required and a satisfactory standard must be reached to allow progression of a trainee to the next level of training1. Summative assessment affects students in a number of different ways; it guides their judgment of what is important to learn, affects their motivation and self-perceptions of competence, structures their approaches to and timing of personal study, consolidates learning, and affects the development of enduring learning strategies and skills12-13. Resnick and Resnick summarize this as “what is not assessed tends to disappear from the curriculum” 13. Accurate recording of CBDs is vital, as the assessment process is transient, and allows external validation and moderation.
Evaluation of any teaching is fundamental to ensure that the curriculum is reaching its objectives14. Student evaluation allows the curriculum to develop and can result in benefits to both students and patients. Kirkpatrick suggested four levels on which to focus evaluation14:
Level 1 – Learner’s reactions
Level 2a – Modification of attitudes and perceptions
Level 2b – Acquisition of knowledge and skills
Level 3 – Change in behaviour
Level 4a – Change in organizational practice
Level 4b – Benefits to patients
At present there is little opportunity within the Intercollegiate Surgical Curriculum Project (ISCP) for students to provide feedback. Thus a typical ‘evaluation cycle’ for course development (figure 2) cannot take place15. Given the widespread nature of subjects covered by CBDs, the variations in marking standards by assessors, and concerns with validity and reliability, an overall evaluation of the curriculum may not be possible. However, regular evaluation of the learning process can improve the curriculum and may lead to better student engagement with the assessment process14. Ideally the evaluation process should be reliable, valid and inexpensive15. A number of evaluation methods exist, but all should allow for ongoing monitoring, review and further enquiry to be undertaken.
Figure 2: Evaluation cycle used to improve a teaching course15
Conclusion
CBDs, like all assessments, do have limitations, but we feel that they play a vital role in the development of trainees. Unfortunately, Pereira and Dean suggest that trainees view CBDs with suspicion7. As a result, students do not engage fully with the assessment and evaluation process and CBDs are not being used to their full potential. The main problems with CBDs relate to the lack of formal assessor training in the use of the WBA and the lack of evaluation of the assessment process. Adequate training of assessors will improve feedback and standardize the assessment process nationally. Evaluation of CBDs should improve the validity of the learning tool, enhancing the training curriculum and encouraging engagement of trainees.
If used appropriately, CBDs are valid, reliable and provide excellent feedback which is effective and efficient in changing practice. However, a combination of assessment modalities should be utilized to ensure that surgical trainees are facilitated in their development across the whole spectrum of the curriculum.
Malaria is caused by obligate intra-erythrocytic protozoa of the genus Plasmodium. Humans can be infected with one (or more) of the following five species: P. falciparum, P. vivax, P. ovale, P. malariae and P. knowlesi. Plasmodia are transmitted by the bite of an infected female Anopheles mosquito, and patients commonly present with fever, headache, fatigue and musculoskeletal symptoms.
Diagnosis is made by demonstration of the parasite on a peripheral blood smear; thick smears are prepared to detect the parasite and thin smears to identify the species. Rapid diagnosis of malaria can be achieved by fluorescence microscopy using a light microscope with an interference filter, or by polymerase chain reaction.
We report a complicated case of P. ovale malaria without fever associated with Hepatitis B virus infection, pre-excitation (WPW pattern), and secondary adrenal insufficiency.
Case Report:
A 23 year old African American man presented to the emergency department with headache and dizziness of one week’s duration. He had 8/10 throbbing headaches associated with dizziness, nausea and a ringing sensation in the ears, and also complained of sweating but denied any fever. He had loose, watery bowel movements 3 times a day for a few days and had vomited once 5 days earlier. He denied any past medical history or family history. He was a chronic smoker of 1 pack per day for 8 years and denied alcohol or drug use. He had travelled to Africa 9 months before presentation and had stayed in Senegal for 1 month, though he did not have any illnesses during or after returning from Africa.
On examination: T: 97.6°F, HR: 115/min, BP: 105/50, no orthostasis, SpO2: 100% on room air and RR: 18/min. Head, neck and throat examinations were normal, and respiratory and cardiovascular system examinations were unremarkable except for tachycardia. Abdominal examination revealed no organomegaly and his CNS examination was unremarkable.
Laboratory examination revealed: WBC: 6.4, Hb: 14.4 and Hct: 41.3, Platelets: 43, N: 83.2, L: 7.4, M: 9.3, B: 0.1. His serum chemistry was normal except for a creatinine of 1.3 (BUN 14) and albumin of 2.6 (total protein 5.7). A pre-excitation (WPW Pattern) was seen on ECG and head CT and Chest X-ray were normal.
He was admitted to the telemetry unit to monitor for arrhythmia. Peripheral blood smear (PBS) was sent because of thrombocytopenia and mild renal failure and revealed malarial parasites later identified as P. ovale (Pic. 1 and 2).
He was treated with Malarone; yet after 2 days of treatment, he was still complaining of headache, nausea and dizziness. There were no meningeal signs. His blood pressure readings were low (95/53) and he was orthostatic. His ECG showed sinus tachycardia and did not reveal any arrhythmias or QTc prolongation. His morning serum cortisol was 6.20 and a subsequent cosyntropin stimulation test revealed a serum cortisol of 13.40 at one hour after injection. His baseline ACTH was <1.1, suggesting secondary adrenal insufficiency. His IGF-1, TSH, FT4, FSH and LH were all within normal limits. His bleeding and coagulation parameters were normal, CD4 was 634 (CD4/CD8: 1.46) and a rapid oral test for HIV was negative. His Hepatitis B profile was as follows: HBsAg: positive, HBV core IgM: negative, HBV core IgG: positive, HBeAg: negative, HBeAb: positive, HBV DNA: 1000 copies/ml (log10 HBV DNA: 3.0).
His blood cultures were negative, his G6PD levels and haemoglobin electrophoresis were normal, haptoglobin was <15 and LDH was 326. MRI of the brain was unremarkable. An abdominal sonogram revealed a normal echo pattern of the liver and spleen, with a spleen size of 12 cm. The secondary adrenal insufficiency was treated with dexamethasone, resulting in gradual improvement of his nausea, vomiting and headache. Furthermore, the platelet count improved to 309. Primaquine was prescribed to complete the course of malaria treatment and he was discharged home following 8 days of hospitalization. Unfortunately he did not return for follow up.
Discussion:
Malaria continues to be a major health problem worldwide. In 2007 the CDC received reports of 1,505 cases of malaria among persons in the United States; 326 cases were reported from New York, with all but one of these cases being acquired outside of the United States1.
While Plasmodia are primarily transmitted through the bite of an infected female Anopheles mosquito, infections can also occur through exposure to infected blood products (transfusion malaria) or by congenital transmission. In industrialized countries most cases of malaria occur among travellers, immigrants, or military personnel returning from areas endemic for malaria (imported malaria). Exceptionally, local transmission through mosquitoes occurs (indigenous malaria). For non-falciparum malaria the incubation period is usually longer (median 15–16 days), and both P. vivax and P. ovale malaria may relapse months or years after exposure due to the presence of hypnozoites in the liver, the longest reported incubation period for P. vivax being 30 years2.
Malaria without fever has been reported in cases of Plasmodium falciparum malaria in non-immune people3. Hepatitis B infection associated with asymptomatic malaria has been reported in the Brazilian Amazon4. That study was conducted in persons infected with P. falciparum and P. vivax with HBV co-infection, though not in a P. ovale group. HBV infection leads to increased IFN-gamma levels5,6, which are important for plasmodium clearance in the liver7, in addition to their early importance for malarial clinical immunity8. High levels of IFN-gamma, IL-6 and TNF-alpha are detectable in the blood of malaria patients and in the spleen and liver in rodent models of malaria9,10. These inflammatory cytokines are known to suppress HBV replication in HBV transgenic mice9. This might explain the low level of HBV viraemia in our patient, although human studies are required to confirm this finding.
Suppression of the hypothalamic-pituitary-adrenocortical axis and both primary and secondary adrenal insufficiency have been reported in severe falciparum malaria10. In our case, the patient did not have any features of severe malaria, and parasitaemia was <5%. Further, the MRI did not reveal any secondary cause for adrenal insufficiency. This might indicate that patients with malaria are more prone to hypothalamo-pituitary-adrenocortical axis dysregulation, yet further studies are required to confirm this phenomenon in patients without severe malaria.
Cardiac complications after malaria have rarely been reported. In our patient the pre-excitation on ECG disappeared after starting antimalarial treatment. Whether the WPW pattern and its subsequent disappearance were incidental or caused by a malarial infection that improved with treatment could not be determined. Lengthening of the QTc and severe cardiac arrhythmia have been observed, particularly after treatment with halofantrine for chloroquine-resistant Plasmodium falciparum malaria11. Post-infectious myocarditis can be associated with cardiac events, especially in combination with viral infections12. A case of likely acute coronary syndrome and possible myocarditis was reported after experimental human malaria infection13. To date, except for cardiac arrhythmias that developed after treatment with halofantrine and quinolines, no other arrhythmias have been reported in patients with malaria before treatment.
Transient thrombocytopenia is very common in uncomplicated malaria in semi-immune adults14. A person with a platelet count <150 × 109/l is 4 times more likely to have asymptomatic malarial infection than one with a count ≥150 × 109/l15. In an observational study of 131 patients, those with involvement of more than one organ system were found to have a lower mean platelet count than those with single organ involvement16.
Conclusions:
Our case highlights the need for further studies to understand the multi-organ involvement in patients without severe malaria as well as early recognition of potential complications to prevent mortality and morbidity in this subgroup of patients.
In recent years, increasing attention has focused on the treatment of chronic pain, with a considerable body of research and publications on the subject. At the same time, opioid prescription, use, abuse and deaths related to the inappropriate use of opioids have increased significantly over the last 10 years. Some reports indicated that there were more than 100 ‘pain clinics’ within a one-mile radius in South Florida between 2009 and 2010, which led to the birth of new opioid prescription laws in Florida and many other states to restrict the use of opioids. In the face of the clinical and social turmoil related to opioid use and abuse, a fundamental question facing each clinician is: are opioids effective and necessary for chronic non-malignant pain?
Chronic low back pain (LBP) is the most common pain condition in pain clinics and most family physician offices which ‘requires’ chronic use of opioids. Nampiaparampil et al conducted a literature review in 20121 and found only one high-quality study on oral opioid therapy for LBP, which showed significant efficacy in pain relief and patient function. Current consensus is that there is weak evidence of favourable effectiveness of opioids compared to placebo in chronic LBP.2 Opioids may be considered in the treatment of chronic LBP if a patient fails other treatment modalities such as non-steroidal anti-inflammatory drugs (NSAIDs), antidepressants, physical therapy or steroid injections. Opioids should be avoided if possible, especially in adolescents, who are at high risk of opioid overdose, misuse, and addiction. It has been demonstrated that the majority of the population with degenerative disc disease, including those with a disc herniation, have no back pain. A Magnetic Resonance Imaging (MRI) report or film with a disc herniation should not be an automatic ‘passport’ for access to narcotics.
Failed back surgery syndrome (FBSS) is often refractory to most treatment modalities and can be very debilitating. There are no well-controlled clinical studies to support or refute the use of opioids in FBSS. Clinical experience suggests oral opioids may be beneficial and necessary for many patients suffering from severe back pain due to FBSS. Intraspinal opioids delivered via implanted pumps may be indicated in those individuals who cannot tolerate oral medications. For elderly patients with severe pain due to spinal stenosis, there is no clinical study to support or refute the use of opioids. However, because NSAIDs may cause serious gastrointestinal, hepatic and renal side effects, opioid therapy may still be a choice in carefully selected patients.
Most studies for pharmacological treatment of neuropathic pain are conducted with diabetic peripheral neuropathy (DPN) patients. Several randomized clinical controlled studies have demonstrated evidence that some opioids, such as morphine sulphate, tramadol,3 and oxycodone controlled-release,4 are probably effective in reducing pain and should be considered as a treatment of choice (Level B evidence), even though anti-epileptics such as pregabalin should still be used as the first line medication.5
Some studies indicate opioids may be superior to placebo in relieving pain due to acute migraine attacks, and Fiorinal with codeine may be effective for tension headache. However, there is a lack of clinical evidence supporting long-term use of opioids for chronic headaches such as migraine, chronic daily headache, medication overuse headache, or cervicogenic headache. Currently, large amounts of opioids are being prescribed for headaches because of patients' demands. Neuroscience data on the effects of opioids on the brain have raised serious concerns about long-term safety and have provided a basis for the mechanism by which chronic opioid use may induce progression of headache frequency and severity.6 A recent study found that chronic opioid use for migraine was associated with more severe headache-related disability, symptomology, comorbidities (depression, anxiety, and cardiovascular disease and events), and greater healthcare resource utilization.7
Many patients with fibromyalgia (FM) come into pain clinics to ask for, or even demand, prescriptions for opioids. There is insufficient evidence to support the routine use of opioids in fibromyalgia.8 Recent studies have suggested that central sensitization may play a role in the aetiology of FM. Three central nervous system (CNS) agents (pregabalin, duloxetine and milnacipran) have been approved by the United States Food and Drug Administration (US FDA) for treatment of FM. However, opioids are still commonly prescribed by many physicians for FM patients by ‘tradition’, sometimes even in combination with a benzodiazepine and the muscle relaxant Soma. We have observed negative health and psychosocial status in patients using opioids and labelled with FM. Opioids should be avoided whenever possible in FM patients in the face of widespread abuse and lack of clinical evidence.9
Adolescents with mild non-malignant chronic pain rarely require long-term opioid therapy.10 Opioids should be avoided if possible in adolescents, who are at high risk of opioid overdose, misuse, and addiction. Patients who have adolescents living at home should store their opioid medication safely.
In conclusion, opioids are effective and necessary in certain cases. However, currently no single drug stands out as the best therapy for managing chronic non-malignant pain, and current opioid treatment is not sufficiently evidence-based. More well-designed clinical studies are needed to confirm the clinical efficacy and necessity for using opioids in the treatment of chronic non-malignant pain. Before more evidence becomes available, and in the face of widespread abuse of opioids in society and possible serious behavioural consequences to individual patients, a careful history and physical examination, assessment of aberrant behavior, controlled substance agreement, routine urine drug tests, checking of state drug monitoring system (if available), trials of other treatment modalities, and continuous monitoring of opioid compliance should be the prerequisites before any opioids are prescribed.
Opioid prescriptions should be given as indicated, not as ‘demanded’.
A 69 year old male with hypertension, a body mass index of 24 kg/m2, neck circumference of 16 inches, and moderate COPD on home oxygen presented to his pulmonary clinic appointment with worsening complaints of fatigue, leg cramps, and intermittent shortness of breath with chest discomfort. A remote, questionable history of syncope five to ten years ago was elicited. His vital signs were: temperature 98.8°F, blood pressure 119/76 mmHg, pulse 92/min and regular, and respirations 20/min. Physical exam was significant for a crowded oropharynx with a Mallampati score of four and distant breath sounds with a prolonged expiratory phase on lung exam, with a normal cardiac exam. Laboratory investigation showed normal complete blood counts, haemoglobin 15 g/dL, and normal chemistries. Compared to his previous studies, a pulmonary function study showed stable parameters with an FEV1 of 1.47 L (69%), FEV1/FVC ratio 0.44 (62%), and a DLCO/alveolar volume ratio of 2.12 (49%). A room air arterial blood gas revealed pH 7.41, PCO2 44 mmHg, and PO2 61 mmHg, with 92% oxygen saturation. A six minute treadmill exercise test performed to assess the need for supplemental oxygen showed that he required supplemental oxygen at 1 L/min via nasal cannula to eliminate hypoxemia during exercise. His chest radiograph was significant for hyperinflation and prominence of interstitial markings. A high resolution computed tomography of the chest demonstrated severe centrilobular and panacinar emphysema only. A baseline electrocardiogram (EKG) showed normal sinus rhythm with an old anterior wall infarct (Figure 1). Echocardiography revealed a normal left ventricle with an ejection fraction of 65%. Right ventricular systolic function was normal, although an elevated mean pulmonary arterial pressure of 55 mmHg was noted.

A diagnostic polysomnogram performed for evaluation of daytime fatigue and snoring at night revealed mild OSA with an apnoea–hypopnoea index (AHI) of 6/hr and 19% of sleep time spent with oxygen saturation below 90% (T-90%). The EKG showed normal sinus rhythm. A full overnight polysomnogram for continuous positive airway pressure (CPAP) titration performed for treatment of sleep disordered breathing was sub-optimal; however, it demonstrated an AHI of 28 during REM (rapid eye movement) sleep, and a T-90% of 93%. The associated electrocardiogram showed Wenckebach second degree AV heart block during REM sleep, usually near the nadir of oxygen desaturation. On a repeat positive airway pressure titration study, therapy with bilevel pressures (BPAP) of 18/14 cmH2O corrected the AHI and nocturnal hypoxemia to within normal limits during non-REM (NREM) and REM sleep, and his electrocardiogram remained in normal sinus rhythm.

A twenty-four hour cardiac Holter monitor revealed baseline sinus rhythm and confirmed the presence of second degree AV block of the Wenckebach type. A one month cardiac event recording showed normal sinus rhythm with frequent episodes of second degree AV block. These varied from Type I progressing to Type II with 2:1 and 3:1 AV block during sleep. Progression to complete heart block was noted, with the longest pause lasting 3.9 seconds during sleep. The patient underwent an electrophysiology study with placement of a dual chamber pacemaker. He was initiated on BPAP therapy. Subsequently, the patient was seen in clinic with improvement in his intermittent episodes of shortness of breath, fatigue, and daytime sleepiness.
Figure 1 - Patient’s baseline EKG, normal sinus rhythm.
Figure 2 - Progression to Mobitz Type II block at 5:07 am.
Figures 3 and 4 - Sinus pauses; longest interval 3.9 seconds at 11:07 pm (Figure 4).
Discussion
In healthy individuals, especially athletes, bradycardia, Mobitz I AV block, and sinus pauses up to 2 seconds are common during sleep and require no intervention5. Cardiac rhythm is controlled primarily by autonomic tone. NREM sleep is accompanied by an increase in parasympathetic, and a decrease in sympathetic, tone. REM sleep is associated with decreased parasympathetic tone and variable sympathetic tone. Bradyarrhythmias in patients with OSA are related to the apnoeic episodes and over 80% are found during REM sleep. During these periods of low oxygen supply, increased vagal activity to the heart resulting in bradyarrhythmias may actually be cardioprotective by decreasing myocardial oxygen demand. This may be important in patients with underlying coronary heart disease.
Some studies have found that Mobitz I AV block may not be benign. Shaw et al6 studied 147 patients with isolated chronic Mobitz I AV block. They inserted pacemakers in 90 patients: 74 patients were symptomatic and 16 patients received a pacemaker for prophylaxis. Outcome data included five-year survival, deterioration of conduction to higher degree AV block, and new onset of various forms of symptomatic bradycardia. They concluded that survival was higher in the paced groups and that risk factors for poor outcomes in patients with Mobitz I included age greater than 45 years, symptomatic bradycardia, organic heart disease, and the presence of a bundle branch block on EKG.
The Sleep Heart Health Study7 found a higher prevalence of first and second-degree heart block among subjects with sleep-disordered breathing (SDB) than in those without (1.8% vs. 0.3% and 2.2% vs. 0.9%, respectively). Gami et al8 observed that, upon review of 112 Minnesota residents who had undergone diagnostic polysomnography and subsequently died suddenly from a cardiac cause, sudden death occurred between the hours of midnight and 6:00 AM in 46% of those with OSA, as compared with 21% of those without OSA. In a study of twenty-three patients with moderate to severe OSA who were each implanted with an insertable loop recorder, about 50% were observed to have frequent episodes of bradycardia and long pauses (complete heart block or sinus arrest) during sleep9. These events showed significant night-to-night intra-individual variability, and their incidence was under-estimated (only 13% detected) by conventional short-term EKG Holter recordings.
Physiologic factors predisposing patients with OSA to arrhythmias include alterations in sympathetic and parasympathetic nervous system activity, acidosis, apnoeas, and arousals2, 10, 11. Some patients with OSA may have an accentuation of the ‘diving reflex’. This protective reflex consists of hypoxemia-induced sympathetic augmentation to muscles and vascular beds associated with increased cardiac vagal activity, which results in increased brain perfusion, bradycardia and decreased cardiac oxygen demand. In patients with cardiac ischemia, poor lung function (i.e. COPD), or both, it may be difficult to differentiate between these protective OSA-associated bradyarrhythmias and those which may lead to sudden death. It has been well established that patients with COPD are at higher risk for cardiovascular morbidity12 and arrhythmias13. Fletcher14 and colleagues reported that the effects of oxygen supplementation on AHI, hypercapnia and supraventricular arrhythmias in patients with COPD and OSA were variable. Among the twenty obese men with COPD studied, oxygen eliminated the bradycardia observed during obstructive apnoeas in most patients and eliminated AV block in two patients. In some patients supplemental oxygen worsened end-apnoea respiratory acidosis; however, this did not increase ventricular arrhythmias.
CPAP therapy has been demonstrated to significantly reduce sleep-related bradyarrhythmias, sinus pauses, and the increased risk of cardiac death9, 15. Despite this, in certain situations placement of a pacemaker may be required. These include persistent life-threatening arrhythmias in patients with severe OSAS on CPAP, arrhythmias in patients who are non-compliant with CPAP, and patients who may have persistent sympathovagal imbalance and haemodynamic fluctuations resulting in daytime bradyarrhythmias16.
Our case is interesting since it highlights the importance of recognizing the association between OSA, COPD, and life-threatening cardiac arrhythmias. Primary care providers should note the possible association of OSA-associated bradyarrhythmias with life-threatening Type II bradyarrhythmias and pauses. Since bradyarrhythmias related to OSA are relieved by CPAP, one option would be to treat with CPAP and observe for the elimination of these arrhythmias using a 24-hour Holter or event recorder17. Compliance with CPAP is variable, and if life-threatening bradycardia is present, placement of a permanent pacemaker may be preferred18.
Our patient is unusual because most studies showing a correlation with the severity of OSA and magnitude of bradycardia have included overweight patients without COPD19. This patient’s electrocardiogram revealed a Type II AV block at 5am (Figure 2). This is within the overnight time frame where patients with OSA have been observed to have an increased incidence of sudden death. Figures 3 and 4 show significant sinus pauses. In selected cases where patients have significant co-morbidities (i.e. severe COPD with OSA), in addition to treatment with positive airway pressure, electrophysiological investigation with placement of a permanent pacemaker may be warranted.
Nosocomial pneumonia in patients receiving mechanical ventilation, also called ventilator-associated pneumonia (VAP), is an important nosocomial infection worldwide which leads to increased length of hospital stay, healthcare costs, and mortality.(1,2,3,4,5) The incidence of VAP ranges from 9% to 27%, with a crude mortality rate that can exceed 50%.(6,7,8,9) Aspiration of bacteria from the upper digestive tract is an important proposed mechanism in the pathogenesis of VAP.(9, 10) The normal flora of the oral cavity may include up to 350 different bacterial species, with tendencies for groups of bacteria to colonize different surfaces in the mouth. For example, Streptococcus mutans, Streptococcus sanguis, Actinomyces viscosus, and Bacteroides gingivalis mainly colonize the teeth; Streptococcus salivarius mainly colonizes the dorsal aspect of the tongue; and Streptococcus mitis is found on both buccal and tooth surfaces.(11) Because of a number of processes, however, critically ill patients lose a protective substance called fibronectin from the tooth surface. Loss of fibronectin reduces the host defence mechanism mediated by reticuloendothelial cells. This reduction in turn results in an environment conducive to attachment of microorganisms to buccal and pharyngeal epithelial cells.(12) Addressing the formation of dental plaque and its continued existence by optimizing oral hygiene in critically ill patients is therefore an important strategy for minimizing VAP.(13)

Two different interventions aimed at decreasing the oral bacterial load are selective decontamination of the digestive tract, involving administration of non-absorbable antibiotics by mouth or through a naso-gastric tube, and oral decontamination, which is limited to topical oral application of antibiotics or antiseptics.(14) Although meta-analyses of antibiotic decontamination of the digestive tract have found positive results(15), the use of this intervention is limited by concern about the emergence of antibiotic-resistant bacteria.(16) One alternative to oral decontamination with antibiotics is to use antiseptics such as chlorhexidine, which act rapidly at multiple target sites and accordingly may be less prone to induce drug resistance.(17) A meta-analysis of four trials on chlorhexidine failed to show a significant reduction in rates of ventilator-associated pneumonia(18), but subsequent randomised controlled trials suggested benefit from this approach.(19) Current guidelines from the Centers for Disease Control and Prevention recommend topical oral chlorhexidine 0.12% during the perioperative period for adults undergoing cardiac surgery (grade II evidence). The routine use of antiseptic oral decontamination for the prevention of ventilator-associated pneumonia, however, remains unresolved.(8) Despite the lack of firm evidence favouring this preventive intervention, a recent survey across 59 European intensive care units from five countries showed that 61% of the respondents used oral decontamination with chlorhexidine. As the emphasis on evidence-based practice increases, integrating recent evidence by meta-analysis could greatly benefit patient care and ensure safer practices. Hence we carried out this meta-analytic review to ascertain the effect of oral decontamination using chlorhexidine on the incidence of ventilator-associated pneumonia and mortality in mechanically ventilated adults.(20)
Methods
Articles published from 1990 to May 2011 in English which were indexed in the following databases were searched: CINAHL, MEDLINE, Joanna Briggs Institute, Cochrane Library, EMBASE, CENTRAL, and the Google search engine. We also screened previous meta-analyses and the reference lists of all retrieved articles for additional studies. Further searches were carried out in two trial registers (www.clinicaltrials.gov/ and www.controlled-trials.com/) and in web postings from conference proceedings, abstracts, and poster presentations.
Retrieved articles were assessed against the inclusion criteria by three independent reviewers from the field of nursing with master's degrees. The inclusion criteria set for this meta-analysis were as follows: a) a VAP definition meeting both clinical and radiological criteria, and b) intubation for more than 48 hours in the ICU.
We excluded studies where the clinical pulmonary infection score alone was used to diagnose VAP. Thereafter the articles were evaluated for randomisation, allocation concealment, blinding techniques, clarity of inclusion and exclusion criteria, outcome definitions, similarity of baseline characteristics, and completeness of follow-up. We considered randomisation to be true if the allocation sequence was generated using computer programs, random number tables, or random drawing from opaque envelopes. Finally, based on the above characteristics, only 9 trials which fulfilled the inclusion criteria were included in the pooled analysis. A brief summary of the 9 trials is given in Table 1. The primary outcomes in this meta-analysis were the incidence of VAP and the mortality rate.
Table 1: Brief summary of trials
Source | Subjects | Intervention | Compared with | Outcome with respect to VAP (C; E) | Outcome with respect to mortality (C; E)
DeRiso et al., 1996 | 353 open heart surgery patients | Chlorhexidine 0.12%, 15 ml preoperatively and twice daily postoperatively until discharge from intensive care unit or death | Placebo | 9/180; 3/173 | 10/180; 2/173
Fourrier et al., 2000 | 60 medical and surgical patients | Chlorhexidine gel 0.2% dental plaque decontamination 3 times daily, compared with bicarbonate solution rinse 4 times daily followed by oropharyngeal suctioning, until 28 days, discharge from ICU or death | Standard treatment | 15/30; 5/30 | 7/30; 3/30
Houston et al., 2002 | 561 cardiac surgery patients | Chlorhexidine 0.12% rinse compared with Listerine, preoperatively and twice daily for 10 days postoperatively or until extubation, tracheostomy, death, or diagnosis of pneumonia | Standard treatment | 9/291; 4/270 | NA; NA
MacNaughton et al., 2004 | 194 medical and surgical patients | Chlorhexidine 0.2% oral rinse twice daily until extubation or death | Placebo | 21/101; 21/93 | 29/93; 29/101
Fourrier et al., 2005 | 228 ICU patients | Chlorhexidine 0.2% gel three times daily during stay in intensive care unit, up to 28 days | Placebo | 12/114; 13/114 | 24/114; 31/114
Segers et al., 2005 | 954 cardiac surgery patients | Chlorhexidine 0.12% nasal ointment and 10 ml oropharynx rinse four times daily, from allocation and admission to hospital until extubation or removal of the nasogastric tube | Placebo | 67/469; 35/485 | 6/469; 8/485
Boop et al., 2006 | 5 cardiac surgery patients (pilot study) | Chlorhexidine gluconate 0.12% oral care twice daily until discharge | Standard treatment | 1/3; 0/2 | NA; NA
Koeman et al., 2006 | 385 general ICU patients | Two treatment groups (2% chlorhexidine, and chlorhexidine with colistin) versus placebo, four times daily until diagnosis of ventilator-associated pneumonia, death, or extubation | Placebo | 23/130; 13/127 | 39/130; 49/127
Tontipong et al., 2008 | 207 general medical ICU or ward patients | 2% chlorhexidine solution times per day until endotracheal tubes were removed | Standard treatment | 12/105; 5/102 | 37/105; 36/102
NA - not available; C - control group; E - experimental group
Data analysis
Meta-analysis was performed using Review Manager 4.2 (Cochrane Collaboration, Oxford) with a random effects model. Pooled effect estimates for binary variables were expressed as relative risks with 95% confidence intervals. Differences in estimates of intervention effect between the treatment and control groups for each hypothesis were tested using a two-sided z test. We calculated the number needed to treat (NNT, with 95% confidence interval) to prevent one episode of ventilator-associated pneumonia during the period of mechanical ventilation. A chi-squared test was used to assess the heterogeneity of the results. A forest plot was drawn using StatsDirect software version 2.72 (England: StatsDirect Ltd, 2008). We considered a two-tailed P value of less than 0.05 as significant throughout the study.
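To make the pooling step concrete, the sketch below shows a minimal DerSimonian–Laird random-effects pooling of log relative risks in Python. It is an illustrative re-implementation, not the Review Manager 4.2 code: the event counts are the VAP outcomes from Table 1 (control; experimental), a 0.5 continuity correction is assumed for the trial with a zero cell, and the resulting figures should be close to, but will not exactly reproduce, the published estimates.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of relative risks.
# Assumption: VAP event counts taken from Table 1 as (events_C, n_C, events_E, n_E);
# a 0.5 continuity correction is applied to trials with any zero cell.
import math

trials = [
    (9, 180, 3, 173),    # DeRiso et al., 1996
    (15, 30, 5, 30),     # Fourrier et al., 2000
    (9, 291, 4, 270),    # Houston et al., 2002
    (21, 101, 21, 93),   # MacNaughton et al., 2004
    (12, 114, 13, 114),  # Fourrier et al., 2005
    (67, 469, 35, 485),  # Segers et al., 2005
    (1, 3, 0, 2),        # Boop et al., 2006 (pilot)
    (23, 130, 13, 127),  # Koeman et al., 2006
    (12, 105, 5, 102),   # Tontipong et al., 2008
]

def log_rr_and_var(c, nc, e, ne):
    # Log relative risk (chlorhexidine vs control) and its variance.
    if 0 in (c, e, nc - c, ne - e):          # continuity correction for zero cells
        c, e, nc, ne = c + 0.5, e + 0.5, nc + 1, ne + 1
    log_rr = math.log((e / ne) / (c / nc))
    var = 1 / e - 1 / ne + 1 / c - 1 / nc
    return log_rr, var

ys, vs = zip(*(log_rr_and_var(*t) for t in trials))

# Fixed-effect weights, Cochran's Q and the DerSimonian-Laird tau^2
w = [1 / v for v in vs]
y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
tau2 = max(0.0, (q - (len(trials) - 1)) / (sum(w) - sum(wi * wi for wi in w) / sum(w)))

# Random-effects pooled relative risk and 95% confidence interval
w_star = [1 / (v + tau2) for v in vs]
pooled = sum(wi * yi for wi, yi in zip(w_star, ys)) / sum(w_star)
se = math.sqrt(1 / sum(w_star))
print(f"Pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f}); "
      f"Q = {q:.1f}, tau^2 = {tau2:.3f}")
```

Small discrepancies from the reported forest-plot statistics are expected, since the exact handling of the zero-event pilot trial and the heterogeneity statistic may differ between implementations.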
Results
Effect of Chlorhexidine in reducing the Incidence of VAP
A total of nine trials were included in this meta-analysis(19,21,22,23,24,25,26,27,28). Pooled analysis of the nine trials, with 2819 patients, revealed a significant reduction in the incidence of VAP using chlorhexidine (relative risk 0.60, 0.47 to 0.76; P < 0.01) (Figure 1). In terms of the number needed to treat (NNT), 21 patients would need to receive oral decontamination with chlorhexidine to prevent one episode of ventilator-associated pneumonia (NNT 21, 14 to 38).
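As a rough arithmetic check (an unweighted calculation over the raw counts in Table 1, so it ignores the between-trial weighting of the formal model), summing the control and chlorhexidine VAP events gives 169/1423 and 99/1396 respectively, and the NNT follows from the absolute risk reduction:

\[
\text{NNT} \;=\; \frac{1}{\text{CER} - \text{EER}} \;=\; \frac{1}{\tfrac{169}{1423} - \tfrac{99}{1396}} \;\approx\; \frac{1}{0.119 - 0.071} \;\approx\; 21
\]

where CER and EER denote the control and experimental (chlorhexidine) event rates; this crude figure agrees with the NNT of 21 reported above.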
Figure 1: Forest plot showing the effect of chlorhexidine oral decontamination in preventing the incidence of ventilator-associated pneumonia. Test for heterogeneity: χ2 = 15.5, df = 8, p < 0.01. Test for overall effect: z = 4.33, p < 0.05.
Effect of Chlorhexidine on overall mortality rate
For assessment of outcomes in terms of mortality, only seven of the nine trials were included, since the other two(23,27) did not report the mortality rate. Pooled analysis of the seven trials, with 2253 patients, revealed no significant effect on the overall mortality rate in patients who received chlorhexidine oral decontamination (relative risk 1.02, 0.83 to 1.26; P = 0.781) (Figure 2).
Figure 2: Forest plot showing the effect of chlorhexidine oral decontamination in reducing overall mortality rate. Test for heterogeneity: χ2 = 0.05, df = 6, p = 0.81. Test for overall effect: z = 0.27, p = 0.78.
Discussion
The effectiveness of oral decontamination to prevent VAP in patients undergoing mechanical ventilation has remained controversial since its introduction, due partly to the discordant results of individual trials. In the present meta-analysis nine trials were included to estimate the pooled effect size; the results revealed a significant reduction in the incidence of VAP among patients who were treated with oral chlorhexidine, but no effect on the overall mortality rate in these patients. There is a firm body of evidence that oropharyngeal colonization is pivotal in the pathogenesis of VAP. More than 25 years ago, Johanson et al described associations between increasing severity of illness, higher occurrence of oropharyngeal colonization, and an increased risk of developing VAP.(29,30) Subsequently, cohort and sequential colonization analyses identified oropharyngeal colonization as an important risk factor for VAP.(31,32,33) Our finding confirms the pivotal role of oropharyngeal colonization in the pathogenesis of VAP, since this meta-analysis indicates that oral decontamination may reduce the incidence of VAP.

Chlorhexidine has been shown to have excellent antibacterial effects, with low antibiotic resistance rates seen in nosocomial pathogens despite long-term use(34). Previous meta-analyses examining the effect of prophylaxis using selective decontamination of the digestive tract reported a significant reduction in the incidence of ventilator-associated pneumonia(35,36,37), and the most recent meta-analysis indicated that such an intervention combined with prophylactic intravenous antibiotics reduces overall mortality(38). In comparison, our review suggests that oral antiseptic prophylaxis alone can significantly reduce the incidence of ventilator-associated pneumonia, but not mortality. A similar result was documented by Ee Yuee Chan et al (2007)(14), who performed a meta-analysis of seven trials with a total of 2144 patients and found a significant result (odds ratio 0.56, 0.39 to 0.81). Another comparable finding in the present study was that the mortality rate was not influenced by the use of chlorhexidine, which is in line with the findings of Ee Yuee Chan et al (2007)(14). Our meta-analysis of chlorhexidine differs from the findings of Pineda et al, who pooled four trials on chlorhexidine and did not report lower rates of ventilator-associated pneumonia (odds ratio 0.42, 0.16-1.06; P=0.07)(18). Our results also extend those of Chlebicki et al, who did not find a statistically significant benefit using the more conservative random effects model after pooling seven trials on chlorhexidine (relative risk 0.70, 0.47-1.04; P=0.07), although their results were significant with the fixed effects model(39). Our meta-analysis included a larger data set, with a total of 9 trials including recent trials(28), which further adds strength to our analysis.
Limitations
Though our literature search was comprehensive, it is possible that we missed other relevant trials. Electronic and hand searches do not completely capture the extent of research output. For example, trials reported at conferences are more likely than trials published in journals to contain negative findings, and more positive than negative results tend to be reported in the literature. This failure to publish studies with negative outcomes is probably due more to authors' lack of inclination to submit such manuscripts than to the unwillingness of editors to accept them. Furthermore, many studies not published in English were not included, e.g. a study by Zamora Zamora F (2011).(40) These limitations put systematic reviews at risk of yielding a less balanced analysis and may therefore affect the recommendations arising from them. In addition, the heterogeneity we found among the trials with respect to populations enrolled, regimens used, outcome definitions and analysis strategies may limit the ability to generalize the results to specific populations.
Conclusion
The finding that chlorhexidine oral decontamination can reduce the incidence of ventilator associated pneumonia could have important implications, offering lower healthcare costs and a reduced risk of antibiotic resistance compared with the use of antibiotics. These results should be interpreted in the light of the moderate heterogeneity of individual trial results and possible publication bias. It may not be prudent to adopt this practice routinely for all critically ill patients until strong data on the long-term risk of selecting antiseptic- and antibiotic-resistant organisms are available. Nevertheless, chlorhexidine oral decontamination seems promising. Further studies using standardised protocols (specifying concentration, frequency and type of agent) are clearly needed to test the effect of chlorhexidine in specific populations and to allow the findings to be generalized. Studies testing the effect of other oral antiseptics in reducing VAP would also enrich the body of knowledge in this area.
Although one cell type may predominate, teratomas usually comprise tissue from all three embryonic germ layers1. Although they generally arise from the gonads, they may also be found in extra-gonadal sites such as the sacrococcygeal region, mediastinum, neck and retroperitoneum.2 Here we report a case of retroperitoneal teratoma in an adult treated successfully by surgery. Its clinical presentation, diagnosis and treatment are reviewed.
Case Report:
A woman aged 28 years presented with pain in the right hypochondrium of one year's duration. There were no associated bowel or urinary symptoms. Examination showed minimal fullness in the right hypochondrium. Routine blood tests and urinalysis were within normal limits. A plain abdominal radiograph showed calcification in the right side of the abdomen (Fig. 1). Ultrasonography demonstrated a 13.6 x 8.1 cm soft tissue mass in the retroperitoneum between the liver and the right kidney. It was heterogeneous and well circumscribed with sharply defined borders, and had some calcification and cystic areas. CT of the abdomen revealed a hypodense lesion between the liver and the right kidney, with fatty attenuation and internal hyperdense areas representing calcification (Fig. 2). A provisional diagnosis of retroperitoneal teratoma was made and open exploration was performed through a right subcostal incision. There was a large cystic mass behind the ascending colon, duodenum and pancreas, located in the retroperitoneal compartment. There were dense fibrous adhesions between the mass and the aorta, but the entire cystic mass was excised successfully.
Postoperatively, the tumour mass, measuring 5 x 5 cm, was opened ex vivo and found to be filled with yellowish creamy material containing hair, sebum and bony tissue. Microscopy confirmed a cystic teratoma with no evidence of malignancy: stratified squamous epithelium with sebaceous and sweat glands, hair shafts, calcification, a few bony spicules and bone marrow elements were all demonstrated (Fig. 3). The postoperative course was uneventful and she was well at the two-month follow-up.
Figure 1. Plain abdominal radiograph showing radio-opaque shadow (arrow heads) in the right upper abdomen.
Figure 2: Computed Tomography showing an encapsulated mass that contains multiple tissue elements including fat and areas of calcification.
Figure 3: Microscopic examination of the tumor showing squamous epithelium (SE), hair shaft (HS), sebaceous glands (SBG)
Discussion:
Teratomas are congenital tumours arising from pluripotential embryonic cells and therefore contain several recognizable somatic tissues3. Teratomas are usually localized to the ovaries, testes, anterior mediastinum or the retroperitoneal area, in descending order of frequency.4 Teratomas constitute less than 10% of all primary retroperitoneal tumours and are therefore relatively uncommon5. Furthermore, retroperitoneal teratomas occur mainly in children and have only rarely been described in adults: half of these cases present in children less than 10 years of age and only a fifth present after 30 years of age. Retroperitoneal teratomas are often located near the upper pole of the kidney, with a preponderance on the left. The case described here is therefore unusual in that it was a primary retroperitoneal teratoma in an adult, on the right side and with adhesions to the aorta.
Retroperitoneal teratomas are seen twice as commonly in females as in males. Teratomas are usually benign if they are cystic and contain sebum or mature tissue; they are more likely to be malignant if they are solid and contain immature embryonic tissue such as fat, cartilage, fibrous and bony elements.6 In these respects our case is similar to other described cases: our patient was female and her teratoma, being cystic, showed no malignancy.
Teratomas are usually asymptomatic, as the retroperitoneal space is extensive enough to allow their free growth; symptoms arise when surrounding structures are compressed. The diagnosis of a retroperitoneal teratoma cannot be made on clinical grounds alone. Ultrasound and computed tomography are important in its diagnosis and may show the presence of calcification, teeth or fat. Calcification on the rim of the tumour or inside the tumour is seen in 53-62% of teratomas; although around three quarters of patients with a benign teratoma may have radiologically visible calcification within it, a quarter of malignant cases may also demonstrate calcification. Computed tomography is better than ultrasonography at defining the extent of the teratoma and its spread to surrounding organs.7
The prognosis is excellent for benign retroperitoneal teratoma if complete resection can be accomplished.
Nerve blocks have a variety of applications in anaesthesia enabling an extra dimension for patients with regards to their pain control and anaesthetic plan. Anaesthetists can perform nerve blocks by a range of methods including landmark techniques and ultrasound guidance, with both of these techniques having the potential to be used with a nerve stimulator.
Nerve blocks are associated with complications including nerve damage, bleeding, pneumothorax and failure. Ultrasound, if used correctly, may help limit such complications.1 NICE guidance on the use of ultrasound for procedures has evolved over the years: ultrasound guidance is now considered an essential requirement for the placement of central venous lines2 and is recommended when performing nerve blocks.3
Method
This survey aimed to assess the methods used by anaesthetists in performing nerve blocks and audited the use and competencies of clinicians in performing such blocks under ultrasound guidance and landmark techniques. This survey also looked at whether performing nerve blocks under ultrasound guidance was hindered by the lack of availability of appropriate resolution ultrasound machines in the workplace.
A paper survey was completed by anaesthetists of all grades at Kettering General Hospital, UK and Birmingham Heartlands Hospital, UK between October and December 2011. The survey consisted of a simple, easy to use, tick-box table and a free-text area in which participants could make further comments. From this we ascertained the following:
Grade of clinician.
Any courses undertaken in ultrasound guided nerve blocks.
Which nerve blocks the clinicians felt they could perform competently with either method (landmark versus ultrasound guided).
Where the anaesthetist could perform a block both with and without ultrasound guidance, which method was used if ultrasound equipment was available.
Whether the ability to perform ultrasound guided nerve blocks was limited by the availability of an ultrasound machine.
The term “landmark technique” refers to the landmark technique with or without a nerve stimulator, and the term “ultrasound technique” to ultrasound guidance with or without a nerve stimulator.
Results
We surveyed a total of 52 anaesthetists: 26 consultants (50%), 17 ST/staff grades (33%) and 9 CT trainees (17%). Across all grades, only 50% had completed a course in ultrasound guided nerve blocks. Forty-two percent of clinicians had encountered situations in which they could not use ultrasound guidance for a nerve block because no ultrasound machine was available at the time of the procedure.
The competencies of clinicians with the landmark and ultrasound technique varied depending on the type of nerve block and the grade of clinician (figure 1).
Various routinely performed blocks were surveyed, allowing a comparison of the ultrasound and landmark techniques. For the interscalene block, 56% of consultants and middle grades combined felt competent with the landmark technique compared with 33% with the ultrasound technique. For the lumbar plexus block, none (0%) of the consultants surveyed felt competent using the ultrasound technique, compared with 73% with the landmark technique. The majority of clinicians felt competent performing the TAP block with the ultrasound technique (65% versus 35% for the landmark technique).
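The combined figures quoted above can be reproduced from the per-grade percentages listed in Figure 1 by weighting each grade by its group size. The following sketch (Python) is purely illustrative; it assumes the published per-grade percentages correspond to whole numbers of respondents.

# Combined consultant-plus-middle-grade competence for the interscalene block,
# derived from the per-grade figures in Figure 1.
groups = [
    # (n respondents, landmark %, ultrasound %)
    (26, 54, 34),   # consultants
    (17, 58, 29),   # ST/staff grades
]

total = sum(n for n, _, _ in groups)
landmark = sum(round(n * lm / 100) for n, lm, _ in groups)      # respondents competent, landmark
ultrasound = sum(round(n * us / 100) for n, _, us in groups)    # respondents competent, ultrasound

print("Landmark: %d/%d = %.0f%%" % (landmark, total, 100 * landmark / total))
print("Ultrasound: %d/%d = %.0f%%" % (ultrasound, total, 100 * ultrasound / total))
# Expected output: roughly 24/43 = 56% (landmark) and 14/43 = 33% (ultrasound).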
Nerve block                  Consultant (n=26)         ST/Staff Grade (n=17)     CT1/2 (n=9)
                             Landmark %    US %        Landmark %    US %        Landmark %    US %
Brachial Plexus
  Interscalene                   54          34            58          29             0           0
  Supra/Infra clavicular         31          23            29          18             0           0
  Axillary                       31          31            47          18             0           0
  Elbow                          12          19            29          12             0           0
Lumbar Plexus                    73           0            65          12            11           0
Sciatic
  Anterior                       39           8            64          12             0           0
  Posterior                      42          27            76          18             0           0
Femoral                         100          69           100          76            36          11
Epidural                        100          19           100          18            36           0
Spinal                          100          12           100          18            56           0
Abdominal
  TAP                            38          85            29          65            33          11
  Rectus Sheath                  19          35            18          47             0          11
Figure 1. This table illustrates competencies for different nerve blocks with the landmark technique and ultrasound technique for different grades of anaesthetists.
Discussion
The findings of this survey and audit have a range of implications for anaesthetists in the workplace:
1) Junior grades of doctors do not feel competent in performing nerve blocks. This may lead to a reliance on senior doctors during on-call shifts to assist in performing blocks such as femoral and TAP blocks. Specific training geared towards junior doctors to make them proficient in such blocks would enable them to provide an anaesthetic plan with more autonomy.
2) A large percentage of consultant grade clinicians felt competent in performing nerve blocks with the landmark technique but not in performing the same blocks with ultrasound guidance. This has implications for training because consultants are the training leads for junior grades of anaesthetists. If consultants do not feel competent in the use of ultrasound guidance for nerve blocks, this could lead to a self-perpetuating cycle.
3) Only 50% of clinicians in this survey had completed a course in ultrasound guided nerve blocks. This, coupled with the finding that clinicians did not feel comfortable performing nerve blocks with ultrasound, indicates a possible need for local training accessible to clinicians to improve their everyday practice.
4) It has been shown that ultrasound guidance improves the success rate of interscalene blocks.4 The practice amongst clinicians in this survey reveals that the majority of anaesthetists (middle and consultant grades) are competent with the landmark technique (56%) compared with the ultrasound technique (33%). This also highlights a training deficit which, if addressed, would enable clinicians to offer a more successful method of performing the interscalene block.
5) This survey highlighted the lack of availability of appropriate ultrasound machines in different departments, leading some clinicians to use the landmark technique when ultrasound guidance was their preference. As a result, a patient may receive a nerve block technique that is riskier and less effective. This highlights a potential need for investment in, and accessibility of, appropriate resolution ultrasound machines across the different work places of a hospital environment.
The main limitation of this project was the small number of clinicians in the respective hospitals the survey was performed in. However, we feel the results reflect the practice of clinicians across most anaesthetic departments. The recommendations highlight a training need for anaesthetic trainees in the use of ultrasound guided nerve blocks. This survey could form the basis of a much larger survey of clinicians across the UK to provide a more insightful review of the competencies and preferences of anaesthetic trainees in performing nerve blocks and the availability of appropriate resolution ultrasound machines.
The difference in the number of clinicians in each category limited comparisons between groups. A larger cohort of participants would enable comparison of nerve block techniques between different grades of clinicians.
This survey included all clinicians regardless of their sub-specialist interest. This may result in a skewing of results, depending on the area of interest of the clinicians surveyed.
This work only highlights the competencies and preferences of clinicians in performing nerve blocks. No extrapolation can be made to complications that arise from the choice of either technique. Studies have shown an improved success rate when performing nerve blocks with ultrasound.4 However this does not directly apply to a specific clinician who may have substantial experience in their method of choice in performing a nerve block.
Even though it is commonly seen in Graves' disease, TPP is not related to the etiology, severity, or duration of thyrotoxicosis.1
The pathogenesis of hypokalaemic periodic paralysis in certain populations with thyrotoxicosis is unclear. Transcellular distribution of potassium is maintained by the Na+/K+–ATPase activity in the cell membrane, and it is mainly influenced by the action of insulin and beta-adrenergic catecholamines.2 Hypokalemia in TPP results from an intracellular shift of potassium and not total body depletion. It has been shown that the Na+/K+–ATPase activity in platelets and muscles is significantly higher in patients with TPP.3 Hyperthyroidism may result in a hyperadrenergic state, which may lead to the activation of the Na+/K+–ATPase pump and result in cellular uptake of potassium.2, 4, 5 Thyroid hormones may also directly stimulate Na+/K+– ATPase activity and increase the number and sensitivity of beta receptors.2, 6 Patients with TPP have been found to have hyperinsulinemia during episodes of paralysis. This may explain the attacks after high-carbohydrate meals.7
CASE REPORT:
A 19 year old male patient presented to our emergency room with sudden onset weakness of the lower limbs; he was unable to stand or walk. On examination, power was 0/5 in both lower limbs and 3/5 in the upper limbs. Routine investigations revealed severe hypokalemia with a serum potassium of 1.6 meq/l (normal range 3.5-5.0 meq/l), a serum phosphorus level of 3.4 mg/dl (normal range 3-4.5 mg/dl) and mild hypomagnesemia with a serum magnesium level of 1.5 mg/dl (normal range 1.8-3.0 mg/dl). ECG showed hypokalemic changes with a prolonged PR interval, increased P-wave amplitude and widened QRS complexes. He was managed with intravenous as well as oral potassium. Further history revealed weight loss, increased appetite and tremors over the past 4 months. He had a multinodular goiter, and a radioactive iodine uptake scan (Iodine-131) showed a toxic nodule (a toxic nodule shows increased iodine uptake while the rest of the gland is suppressed); there was no exophthalmos and no sensory or cranial nerve deficits. Thyroid function tests revealed thyrotoxicosis with a free T4 of 4.3 ng/dl (normal range 0.8-1.8 ng/dl), T3 of 279 ng/dl (normal range 60-181 ng/dl) and a TSH level of <0.15 milliunits/L (normal range 0.3-4 milliunits/L). He was managed with intravenous potassium and propranolol and showed dramatic improvement of his symptoms. The patient was discharged home on carbimazole with a diagnosis of TPP secondary to toxic nodular goiter.
In this case there was a significant family history: one of his elder brothers had died suddenly (cause unknown) and his mother had primary hypothyroidism and was on levothyroxine replacement therapy.
DISCUSSION:
TPP is seen most commonly in Asian populations, with an incidence of approximately 2% in patients with thyrotoxicosis of any cause.1,8,9,10 The attacks of paralysis have a well-marked seasonal incidence, usually occurring during the warmer months.1 The pathogenesis of the hypokalaemia has been attributed by some authors to an intracellular shift of body potassium, which is catecholamine mediated.11,12 Shizume and his group studied total exchangeable potassium and found that patients with thyrotoxic periodic paralysis were not significantly different from controls when the value was related to lean body mass.11 The paralytic symptoms and signs improve as potassium returns from the intracellular space back into the extracellular space.13 The diurnal variation in potassium movement, with nocturnal potassium influx into skeletal muscle, would explain the tendency for thyrotoxic periodic paralysis to occur at night.14 Hypophosphataemia and hypomagnesaemia are also known to occur in association with thyrotoxic periodic paralysis.14,15,16,17,18 The correction of hypophosphataemia without phosphate administration supports the possibility of an intracellular shift of phosphate.16 Electrocardiographic findings supportive of a diagnosis of TPP rather than sporadic or familial periodic paralysis are sinus tachycardia, elevated QRS voltage and first-degree AV block (sensitivity 97%, specificity 65%).20 In addition to ST-segment depression, T-wave flattening or inversion and the presence of U waves are typical of hypokalaemia.
The management involves dealing with the acute attack as well as treating the underlying condition to prevent future attacks. Rapid administration of oral or intravenous potassium chloride can abort an attack and prevent cardiovascular and respiratory complications.4 A small dose of potassium is the treatment of choice for facilitating recovery and reducing rebound hyperkalaemia due to the release of potassium and phosphate from the cells on recovery.1,2,3 Rebound hyperkalaemia has been reported in approximately 40% of patients with TPP, especially those who received >90 mmol of potassium chloride within the first 24 hours.4 Another mode of treatment is propranolol, a nonselective beta-blocker, which prevents the intracellular shift of potassium and phosphate by blunting the hyperadrenergic stimulation of Na+/K+–ATPase.20 Hence, initial therapy for stable TPP should include propranolol.21,22,23 The definitive therapy for TPP is treatment of the hyperthyroidism with antithyroid medications, surgical thyroidectomy, or radioiodine therapy.
Normal sleep is divided into non-REM and REM sleep. REM occurs every 90-120 minutes during adult sleep throughout the night, with each period of REM increasing in length such that the REM periods in the early morning hours are the longest and may last 30-60 minutes. Overall, REM accounts for 20-25% of sleep time but is weighted toward the second half of the night. During REM sleep, polysomnography shows a low-voltage, mixed-frequency EEG and a low-voltage chin EMG associated with intermittent bursts of rapid eye movements. During periods of REM, breathing becomes irregular, blood pressure rises and heart rate increases due to excess adrenergic activity. The brain is highly active during REM, and the electrical activity recorded by EEG during REM sleep is similar to that of wakefulness.
Parasomnias are undesirable, unexpected, abnormal behavioral phenomena that occur during sleep. There are three broad categories of parasomnias:
Disorders of Arousal (from Non-REM sleep)
Parasomnias usually associated with REM sleep, and
Other parasomnias, which also include the secondary parasomnias.
RBD is the only parasomnia which requires polysomnographic testing as part of the essential diagnostic criteria.
Definition of RBD
“RBD is characterized by the intermittent loss of REM sleep electromyographic (EMG) atonia and by the appearance of elaborate motor activity associated with dream mentation” (ICSD-2).1 These motor phenomena may be complex and highly integrated and often are associated with emotionally charged utterances and physically violent or vigorous activities. RBD was first recognized and described by Schenck CH et al. in 1986.2 This diagnosis was first incorporated in the International Classification of Sleep Disorders (ICSD) in 1990. (American Academy of Sleep Medicine)
A defining feature of normal REM sleep is active paralysis of all somatic musculature (sparing the diaphragm to permit ventilation). This results in diffuse hypotonia of the skeletal muscles, preventing the enactment of dreams associated with REM sleep. In RBD there is an intermittent loss of muscle atonia during REM sleep, which can be measured objectively with EMG as intense phasic motor activity (figures 1 and 2).
Figure 1
Figure 2
This loss of inhibition often precedes the complex motor behaviors seen during REM sleep. Additionally, RBD patients will report that their dream content is often very violent or vigorous. Dream-enacting behaviors include talking, yelling, punching, kicking, sitting up, jumping from bed, arm flailing and grabbing, and most often the sufferer will, upon waking from the dream, immediately report a clear memory of the dream that coincides well with the high-amplitude, violent, defensive activity witnessed. This complex motor activity may result in serious injury to the dreamer or bed partner, which then prompts the evaluation.
Prevalence
The prevalence of RBD is about 0.5% in the general population.1, 3 RBD preferentially affects elderly men (in the 6th and 7th decades), with a ratio of women to men of 1 to 9.4 The mean age at disease onset is 60.9 years and at diagnosis 64.4 years.5 RBD has been reported in an 18 year old female with juvenile Parkinson disease,6 so age and gender are not absolute criteria.
In Parkinson disease (PD) the reported prevalence ranges from 13-50%,7, 14-19 in Lewy body dementia (DLB) it is 95%,8 and in multiple system atrophy (MSA) 90%.9 The presence of RBD is a major diagnostic criterion for MSA. RBD has also been reported in juvenile Parkinson disease and pure autonomic failure;10-12 all of these neurodegenerative disorders are synucleinopathies.13
Physiology
During wakefulness, the neurons of the locus coeruleus, raphe nuclei, tuberomammillary nucleus, pedunculopontine nucleus, laterodorsal tegmental area and the perifornical area fire at a high rate and cause arousal by activating the cerebral cortex. During REM sleep, these excitatory areas fall silent with the exception of the pedunculopontine nucleus and laterodorsal tegmental area. These regions project to the thalamus and activate the cortex during REM sleep; this cortical activation is associated with dreaming in REM. Descending excitatory fibers from the pedunculopontine nucleus and laterodorsal tegmental area innervate the medial medulla, which then sends inhibitory projections to motor neurons, producing the skeletal muscle atonia of REM sleep.20-21
There are two distinct neural systems which collaborate in the “paralysis” of normal REM sleep, one is mediated through the active inhibition by neurons in the nucleus reticularis magnocellularis in the medulla via the ventrolateral reticulospinal tract synapsing on the spinal motor neurons and the other system suppresses locomotor activity and is located in pontine region.22
Pathophysiology
REM sleep contains two types of variables, tonic (occurring throughout the REM period), and phasic (occurring intermittently during a REM period). Tonic elements include desynchronized EEG and somatic muscle atonia (sparing the diaphragm). Phasic elements include rapid eye movements, middle ear muscle activity and extremity twitches. The tonic electromyogram suppression of REM sleep is the result of active inhibition of motor activity originating in the perilocus coeruleus region and terminating in the anterior horn cells via the medullary reticularis magnocellularis nucleus.
In RBD, the observed motor activity may result either from impairment of tonic REM muscle atonia or from increased phasic locomotor drive during REM sleep. One mechanism by which RBD may arise is disruption of neurotransmission in the brainstem, particularly at the level of the pedunculopontine nucleus.23 Pathogenetically, reduced striatal dopaminergic transmission has been found in those with RBD,24-25 and neuroimaging studies support dopaminergic abnormalities.
Types of RBD
RBD can be categorized based on severity:
Mild RBD occurring less than once per month,
Moderate RBD occurring more than once per month but less than once per week, associated with physical discomfort to the patient or bed partner, and
Severe RBD occurring more than once per week, associated with physical injury to patient or bed partner.
RBD can be categorized based on duration:
Acute presenting with one month or less,
Subacute with more than one month but less than 6 months,
Chronic with 6 months or more of symptoms prior to presentation.
Acute RBD: In 55 - 60% of patients with RBD the cause is unknown, but in 40 - 45% the RBD is secondary to another condition. Acute onset RBD is almost always induced or exacerbated by medications (especially Tri-Cyclic Antidepressants, Selective Serotonin Reuptake Inhibitors, Mono-Amine Oxidase Inhibitors, Serotonin Norepinephrine Reuptake Inhibitors,26 Mirtazapine, Selegiline, and Biperiden) or during withdrawal of alcohol, barbiturates, benzodiazepine or meprobamate. Selegiline may trigger RBD in patients with Parkinson disease. Cholinergic treatment of Alzheimer’s disease may trigger RBD.
Chronic RBD: The chronic form of RBD was initially thought to be idiopathic; however long term follow up has shown that many eventually exhibit signs and symptoms of a degenerative neurologic disorder. One recent retrospective study of 44 consecutive patients diagnosed with idiopathic RBD demonstrated that 45% (20 patients) subsequently developed a neurodegenerative disorder, most commonly Parkinson disease (PD) or Lewy body dementia, after a mean of 11.5 years from reported symptoms onset and 5.1 years after RBD diagnosis.27
The relationship between RBD and PD is complex and not all persons with RBD develop PD. In one study of 29 men presenting with RBD followed prospectively, the incidence of PD was 38% at 5 years and 65% after 12 years.7, 28, 29 Contrast this with the prevalence of the condition in multiple system atrophy, where RBD is one of the primary symptoms occurring in 90% of cases.9 In cases of RBD, it is absolutely necessary not only to exclude any underlying neurodegenerative disease process but also to monitor for the development of one over time in follow up visits.
Clinical manifestations
Sufferers of RBD usually present to the doctor with complaints of sleep related injury, or fear of injury, as a result of dramatic, violent, potentially dangerous motor activity during sleep; 96% of patients report harm to themselves or their bed partner. Behaviors described during dreaming include talking, yelling, swearing, grabbing, punching, kicking, and jumping or running out of the bed. One clinical clue to the source of the sleep related injury is the timing of the behaviors: because RBD occurs during REM sleep, it typically appears at least 90 minutes after falling asleep and is most often noted during the second half of the night, when REM sleep is more abundant.
One quarter of subjects who develop RBD have prodromal symptoms several years prior to the diagnosis. These symptoms may consist of twitching during REM sleep but may also include other types of simple motor movements and sleep talking or yelling.30-31 Daytime somnolence and fatigue are rare because gross sleep architecture and the sleep-wake cycle remain largely normal.
RBD in other neurological disorders and Narcolepsy:
RBD has also been reported in other neurologic diseases such as Multiple Sclerosis, vascular encephalopathies, ischemic brain stem lesions, brain stem tumors, Guillain-Barre syndrome, mitochondrial encephalopathy, normal pressure hydrocephalus, subdural hemorrhage, and Tourette’s syndrome. In most of these there is likely a lesion affecting the primary regulatory centers for REM atonia.
RBD is particularly frequent in narcolepsy: one study found that 36% of patients with narcolepsy had symptoms suggestive of RBD. Unlike idiopathic RBD, women with narcolepsy are as likely to have RBD as men, and the mean age was found to be 41 years.32 While the mechanism underlying RBD is not understood in this population, narcolepsy is considered a disorder of REM state dissociation. Cataplexy is paralysis of skeletal muscles in the setting of wakefulness, often triggered by strong emotions such as humor. In narcoleptics who regularly experienced cataplexy, 68% reported RBD symptoms, compared to 14% of those who never or rarely experienced cataplexy.32-33 There is evidence of a profound loss of hypocretin in the hypothalamus of narcoleptics with cataplexy, and this may be a link that warrants further investigation in understanding the mechanism of RBD in narcolepsy with cataplexy. It is prudent to follow narcoleptic patients, question them about symptoms of RBD and treat them accordingly, especially those with cataplexy and other associated symptoms.
Diagnostic criteria for REM Behavior Disorder(ICSD-2: ICD-9 code: 327.42)1
A. Presence of REM sleep without Atonia: the EMG finding of excessive amounts of sustained or intermittent elevation of submental EMG tone or excessive phasic submental or (upper or lower) limb EMG twitching (figure 1 and 2).
B. At least one of the following is present:
i. Sleep related injurious, potentially injurious, or disruptive behaviors by history
ii. Abnormal REM sleep behaviors documented during polysomnographic monitoring
C. Absence of EEG epileptiform activity during REM sleep unless RBD can be clearly distinguished from any concurrent REM sleep-related seizure disorder.
D. The sleep disturbance is not better explained by another sleep disorder, medical or neurologic disorder, mental disorder, medication use, or substance use disorder.
Differential diagnosis
Several sleep disorders causing behaviors in sleep can be considered in the differential diagnosis, such as sleep walking (somnambulism), sleep terrors, nocturnal seizures, nightmares, psychogenic dissociative states, post-traumatic stress disorder, nocturnal panic disorder, delirium and malingering. RBD may be triggered by sleep apnea and has been described as triggered by nocturnal gastroesophageal reflux disease.
Evaluation and Diagnosis
Detailed history of the sleep wake complaints
Information from a bed partner is most valuable
Thorough medical, neurological, and psychiatric history and examination
Screening for alcohol and substance use
Review of all medications
PSG (mandatory): The polysomnographic study should be more extensive, with an expanded EEG montage, monitors for movements of all four extremities, continuous technologist observation and continuous video recording with good sound and visual quality to allow capture of any sleep related behaviors
Multiple Sleep Latency Test (MSLT): Only recommended in the setting of suspected coexisting Narcolepsy
Brain imaging (CT or MRI) is mandatory if there is suspicion of underlying neurodegenerative disease.
Management
RBD may have legal consequences and can be associated with substantial relationship strain; accurate diagnosis and adequate treatment, which include non-pharmacological and pharmacological management, are therefore important.
Non-pharmacological management: the acute form appears to be self-limited following discontinuation of the offending medication or completion of withdrawal treatment. For chronic forms, protective measures during sleep are warranted to minimize the risk of injury to the patient and bed partner; these patients are at risk of falls due to physical limitations and the use of medications. Protective measures include removing bedside stands, bedposts and low dressers, and applying heavy curtains to windows. In extreme cases, placing the mattress on the floor to prevent falls from the bed has been successful.
Pharmacological management: Clonazepam is highly effective and is the drug of choice. A very low dose will resolve symptoms in 87 to 90% of patients.4, 5, 7-34 The recommended treatment is 0.5 mg clonazepam 30 minutes prior to bedtime, and for more than 90% of patients this dose remains effective without tachyphylaxis. In the setting of breakthrough symptoms the dose can be slowly titrated up to 2.0 mg. The mechanism of action is not well understood, but clonazepam appears to decrease REM sleep phasic activity while having no effect on REM sleep atonia.35
Melatonin is also effective and can be used as monotherapy or in conjunction with clonazepam; the suggested dose is 3 to 12 mg at bedtime. Pramipexole may also be effective36-38 and is suggested for use when clonazepam is contraindicated or ineffective. It is interesting to note that during holidays from the drug, the RBD can take several weeks to recur. Management of patients with a concomitant disorder such as narcolepsy, depression, dementia, Parkinson disease or Parkinsonism can be very challenging, because medications such as SSRIs, selegiline and cholinergic agents used to treat these disorders can cause or exacerbate RBD. In RBD associated with narcolepsy, clonazepam is usually added to management and is fairly effective.
Follow-up
Because RBD may occur in association with a neurodegenerative disorder, it is important to refer every patient with RBD to a neurologist as early as possible. This allows early diagnosis of, and a care plan for, any neurodegenerative disorder, including regular follow up, optimization of management to provide a better quality of life, and attention to medico-legal issues.
Prognosis
In acute and idiopathic chronic RBD, the prognosis with treatment is excellent. In the secondary chronic form, prognosis parallels that of the underlying neurologic disorder. Treatment of RBD should be continued indefinitely, as violent behaviors and nightmares promptly reoccur with discontinuation of medication in almost all patients.
Conclusions
RBD and neurodegenerative diseases are closely interconnected. RBD often antedates the development of a neurodegenerative disorder; diagnosis of idiopathic RBD portends a risk of greater than 45% for future development of a clinically defined neurodegenerative disease. Once identified, close follow-up of patients with idiopathic RBD could enable early detection of neurodegenerative diseases. Treatment for RBD is available and effective for the vast majority of cases.
Key Points
Early diagnosis of RBD is of paramount importance
Polysomnogram is an essential diagnostic element
Effective treatment is available
Early treatment is essential in preventing injuries to patient and bed partner
Apparent idiopathic form may precede development of Neurodegenerative disorder by decades
Acute non-traumatic knee effusion is a common condition presenting to the Orthopaedic department and can be caused by a wide variety of diseases (Table 1). Septic arthritis is the most common and most serious etiology; it can involve any joint, but the knee is the most frequently affected. Accurate and swift diagnosis of septic arthritis in the acute setting is vital to prevent joint destruction, since cartilage loss occurs within hours of onset1,2. Inpatient mortality due to septic arthritis has been reported at between 7% and 15%, despite improvements in antibiotic therapy3,4. Crystal arthritis (gout/pseudogout) is the second most common differential diagnosis. It is often under-diagnosed, and affected patients consequently do not receive rheumatology referral for appropriate treatment and follow-up. In addition, some patients are misdiagnosed with septic arthritis and treated with inappropriate antibiotics. Untreated crystal-induced arthropathy has been shown to cause degenerative joint disease and disability, leading to a considerable health economic burden.6,7
When the patient is systemically unwell, it is common practice to start empirical antibiotic treatment after joint aspiration for fear of septic arthritis. This aims to minimize the risk of joint destruction while awaiting Gram stain microscopy and microbiological culture results. In a persistently painful, swollen knee with a negative Gram stain and culture, antibiotic therapy can be continued, with or without arthroscopic knee washout, based on clinical suspicion of infection 8.
We have therefore undertaken a retrospective study to review our management of patients with non-traumatic hot swollen knees and in particular patients with crystal-induced arthritis.
Materials and methods:
We performed a retrospective review of 180 consecutive patients presenting with acute non-traumatic knee effusion referred to the on-call Orthopaedic team at the study hospital between November 2008 and November 2011. Sixty patients were included in the study (Table 2): 43 males and 17 females, with a mean age of 36 years (range, 23-93 years).
Patient demographics, clinical presentation, co-morbidities, current medications and body temperature were recorded. The results of blood inflammatory markers (WBC, CRP), blood cultures, synovial fluid microscopy, culture and polarized microscopy were also collected. Subsequent treatment (e.g. antibiotics, surgical intervention), complications, and mortality rates were reviewed.
Results:
On presentation, a decreased range of movement was evident in all patients. Associated knee pain was reported by 55 patients (92%), and 24 patients (40%) had fever (temperature ≥ 37.5ºC). All joints were aspirated prior to starting antibiotics and samples were sent for Gram stain microscopy, culture and antibiotic sensitivity, and polarized light microscopy.
Of the 60-patient cohort, 26 were admitted and started on intravenous antibiotics based on clinical suspicion of infection (Table 3). The median duration of inpatient admission was 4 days (range, 2 to 14 days), and the median duration of antibiotic therapy was 6 days (range, 2 to 25 days). Eighteen patients were treated non-operatively with antibiotics and anti-inflammatory medications; arthroscopic washout was performed in the remaining eight knees. In this group of patients, the leucocyte count in the joint aspirate ranged from 0-3 leucocytes/mm3 and the blood leucocyte count ranged from 4-20 leucocytes/mm3, while the mean CRP was 37.8 mg/l (range, 1-275 mg/l).
Review of the laboratory results revealed that four patients had organisms identified on Gram-stained films: two samples grew Staphylococcus aureus and two grew beta-haemolytic streptococci. Eight patients had crystals identified on polarized light microscopy of the joint aspirate: three showed monosodium urate (MSU) crystals while five had calcium pyrophosphate (CPP) crystals. These patients received antibiotic therapy for a mean duration of 10 days (range, 1-30 days), and two were taken to theatre for arthroscopic lavage. Only two patients received a rheumatology referral.
Seven patients developed complications during their hospital stay. Four contracted diarrhoea; three had negative stool cultures, but one was positive for Clostridium difficile, developed toxic megacolon and died. One patient with known ischaemic heart disease had a myocardial infarction and died. Two further patients acquired urinary tract infections.
Discussion:
Acute monoarthritis of the knee joint can be a manifestation of infection, crystal deposition, osteoarthritis and a variety of systemic diseases, and arriving at a correct diagnosis is crucial for appropriate treatment 9. Septic arthritis, the most common etiology, develops as a result of haematogenous seeding, direct introduction, or extension from a contiguous focus of infection. Joint infection is a medical emergency that can lead to significant morbidity and mortality; the mainstay of treatment comprises appropriate antimicrobial therapy and joint drainage 10,11. The literature reveals that the knee is the most commonly affected joint (55%), followed by the shoulder (14%), in the septic joint population 12-13.
The second most common differential diagnosis is crystal-induced monoarthritis, of which gout and pseudogout are the two most common pathologies 14. These are debilitating illnesses in which recurrent episodes of pain and joint inflammation are caused by the formation of crystals within the joint space and the deposition of crystals in soft tissue. Gout is caused by monosodium urate (MSU) crystals, while pseudogout is inflammation caused by calcium pyrophosphate (CPP) crystals, sometimes referred to as calcium pyrophosphate disease (CPPD) 15,16. Misdiagnosis of crystal arthritis, or delay in its treatment, can gradually lead to degenerative joint disease and disability, in addition to renal damage and failure 5. The clinical picture of acute crystal-induced arthritis can sometimes be difficult to differentiate from acute septic arthritis: it may manifest with fever, malaise, and raised peripheral WBC, CRP and other acute phase reactants, and the synovial fluid aspirate can be turbid secondary to an increase in polymorphonuclear cells. Diagnosis can therefore be challenging, and crystal identification on polarized microscopy is considered the gold standard 17, 18, 19. Rest, ice and topical analgesia may be helpful, but systemic non-steroidal anti-inflammatory medications are the treatment of choice for acute attacks provided there are no contraindications 20.
In this study, all joints were aspirated and samples were sent for microscopy, culture and sensitivity, and polarized microscopy for crystals, in line with the British Society for Rheumatology and British Orthopaedic Association guidelines 8. Aspiration not only aids diagnosis but also reduces the pain caused by joint swelling. Twenty-six patients were admitted on clinical and biochemical suspicion of septic arthritis; they presented with an acute phase response manifested by malaise, fever and raised inflammatory markers, and were treated with antibiotic therapy and non-steroidal anti-inflammatory medications while awaiting the results of microbiology and polarized light microscopy. Four of these patients developed complications secondary to antibiotic therapy, including one death due to Clostridium difficile infection and subsequent toxic megacolon.
Infection was confirmed to be the underlying cause in four patients (6%), in whom organisms were identified on Gram-stained films. They underwent arthroscopic washout and continued antibiotic therapy according to the culture and sensitivity results of their knee aspirates until their symptoms and blood markers returned to normal. Arthroscopic washout was also required in four patients with negative microscopy because of persistent symptoms despite antibiotic treatment, as recommended by the British Society for Rheumatology and the British Orthopaedic Association 8; two of these showed calcium pyrophosphate crystals on polarized microscopy and two had neither bacterial growth nor crystals.
We retrospectively reviewed laboratory results and found that eight patients (13%) were confirmed to have crystal arthritis as crystals (MSU/CPP) were identified in their knee aspirates by means of polarized microscopy. However, only two patients (25%) received this diagnosis whilst in hospital. In both cases, antibiotic therapy was discontinued and they were referred to a rheumatologist for appropriate treatment and follow up. The remaining six patients continued to receive antibiotics and two of them were taken to theatre for arthroscopic lavage on clinical suspicion of infection as symptoms did not improve significantly with medications.
Our study shows that crystal-induced arthritis can easily be overlooked or misdiagnosed as septic arthritis. As a result, patients received unnecessary antibiotic therapy, developed serious complications and underwent surgical procedures, all of which could have been avoided; moreover, most were not referred to a rheumatologist.
Acute knee effusion is a common presentation to the Orthopaedic department and although we seem to be providing a good service for septic arthritis, patients with crystal arthropathy are still slipping through the net. Clinicians should always remember that crystal arthritis is almost as common as septic arthritis and will eventually lead to joint damage if not managed appropriately. It must be excluded as a cause of hot swollen joints by routine analysis of joint aspirate using polarized light microscopy. If crystal arthritis is proved to be the underlying pathology, patients must be treated accordingly and receive a prompt rheumatology referral for further management.
Saliva is the watery and usually frothy substance produced in and secreted from the three paired major salivary glands (parotid, submandibular and sublingual) and several hundred minor salivary glands; it is composed mostly of water but also includes electrolytes, mucus, antibacterial compounds and various enzymes. Healthy persons are estimated to produce 0.75 to 1.5 liters of saliva per day. At least 90% of daily salivary production comes from the major salivary glands, while the minor salivary glands produce about 10%. On stimulation (olfactory, tactile or gustatory), salivary flow increases five-fold, with the parotid glands providing the preponderance of saliva.1
Saliva is a major protector of the tissues and organs of the mouth. In its absence both the hard and soft tissues of the oral cavity may be severely damaged, with an increase in ulceration, infections such as candidiasis, and dental decay. Saliva is composed of a serous component (containing alpha-amylase) and a mucous component, which acts as a lubricant. It is saturated with calcium and phosphate and is necessary for maintaining healthy teeth. The bicarbonate content of saliva enables it to buffer the acids produced by dental plaque, which otherwise remain in contact with the teeth. Moreover, saliva helps with bolus formation and lubricates the throat for the easy passage of food. The organic and inorganic components of salivary secretion have a protective potential: they act as a barrier to irritants and a means of removing cellular and bacterial debris. Saliva contains various components involved in defence against bacterial and viral invasion, including mucins, lipids, secretory immunoglobulins, lysozymes, lactoferrin, salivary peroxidase and myeloperoxidase. Salivary pH is about 6-7, favouring the digestive action of the salivary enzyme alpha-amylase, which begins starch digestion.
Salivary glands are innervated by the parasympathetic and sympathetic nervous system. Parasympathetic postganglionic cholinergic nerve fibers supply cells of both the secretory end-piece and ducts and stimulate the rate of salivary secretion, inducing the formation of large amounts of a low-protein, serous saliva. Sympathetic stimulation promotes saliva flow through muscle contractions at salivary ducts. In this regard both parasympathetic and sympathetic stimuli result in an increase in salivary gland secretions. The sympathetic nervous system also affects salivary gland secretions indirectly by innervating the blood vessels that supply the glands.
Table 1: Functions of saliva
Digestion and swallowing: initial process of food digestion; lubrication of mouth, teeth, tongue and food boluses; tasting food; amylase digestion of starch.
Disinfectant and protective role: effective cleaning agent.
Oral homeostasis: protection against tooth decay, maintenance of dental health and oral odour; bacteriostatic and bactericidal properties; regulation of oral pH.
Speaking: lubricates tongue and oral cavity.
Drooling (also known as driveling, ptyalism, sialorrhea, or slobbering) is when saliva flows outside the mouth, defined as “saliva beyond the margin of the lip”. This condition is normal in infants but usually stops by 15 to 18 months of age; sialorrhea after four years of age is generally considered pathological. The prevalence of drooling of saliva in patients with chronic neurological conditions is high, with impairment of social integration and difficulty performing oral motor activities during eating and speech, with repercussions on quality of life. Drooling occurs in about one in two patients affected by motor neuron disease, and one in five needs continuous saliva elimination7; its prevalence is about 70% in Parkinson disease8 and between 10 and 80% in patients with cerebral palsy9.
Pathophysiology
Pathophysiology of drooling is multifactorial. It is generally caused by conditions resulting in
Excess production of saliva- due to local or systemic causes (table 2)
Inability to retain saliva within the mouth- poor head control, constant open mouth, poor lip control, disorganized tongue mobility, decreased tactile sensation, macroglossia, dental malocclusion, nasal obstruction.
Problems with swallowing- resulting in excess pooling of saliva in the anterior portion of the oral cavity e.g. lack of awareness of the build-up of saliva in the mouth, infrequent swallowing, and inefficient swallowing.
Drooling is mainly due to neurological disturbance and less frequently to hypersalivation. Under normal circumstances, a person is able to compensate for increased salivation by swallowing. However, sensory dysfunction may decrease a person’s ability to recognize drooling, and anatomical or motor dysfunction of swallowing may impede the ability to manage the increased secretions.
Depending on its duration, drooling can be classified as acute (e.g. during infections such as epiglottitis or peritonsillar abscess) or chronic (e.g. due to neurological causes).
Symptoms
Drooling of saliva can affect the quality of life of patients and/or their carers, and it is important to assess the rate and severity of symptoms and their impact on daily life.
Table 3: Effects of untreated drooling of saliva

Physical: perioral chapping (skin cracking); maceration with secondary infection; dehydration; foul odour; aspiration/pneumonia; speech disturbance; interference with feeding.

Psychological: isolation; barriers to education (damage to books or electronic devices); increased dependency and level/intensity of care; damage to electronic devices; decreased self-esteem; difficult social interaction.
Assessment
Assessment of the severity of drooling and its impact on quality of life for the patient and their carers help to establish a prognosis and to decide the therapeutic regimen. A variety of subjective and objective methods for assessment of sialorrhoea have been described3.
History (from patient and carers)
Establish the possible cause, severity, complications and possibility of improvement; the age and mental status of the patient; the chronicity of the problem; associated neurological conditions; timing and provoking factors; an estimate of the quantity of saliva (use of bibs, number of clothing changes required per day); and the impact on day-to-day life for the patient and carer.
Physical examination
Evaluate level of alertness, emotional state, hydration status, hunger, head posture
Examination of the oral cavity: sores on the lip or chin, dental problems, tongue control, swallowing ability, nasal airway obstruction, decreased intraoral sensitivity; assessment of the health of the teeth, gums, oral mucosa and tonsils; anatomical closure of the oral cavity, tongue size and movement, and jaw stability. Assessment of swallowing.
Assess severity and frequency of drooling (as per table 4)
Investigation
Lateral neck x-ray (in peritonsillar abscess)
Ultrasound to diagnose local abscess
Barium swallow to diagnose swallowing difficulties
Audiogram- to rule out conductive deafness associated with oropharyngeal conditions
Salivary gland scan- to determine functional status
Table 4: System for assessment of frequency and severity of drooling

Drooling severity                                          Points
Dry (never drools)                                           1
Mild (wet lips only)                                         2
Moderate (wet lips and chin)                                 3
Severe (clothing becomes damp)                               4
Profuse (clothing, hands, tray and objects become wet)       5

Drooling frequency                                         Points
Never drools                                                 1
Occasionally drools                                          2
Frequently drools                                            3
Constantly drools                                            4
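As a simple illustration of how the scale in Table 4 might be applied in practice, the sketch below (Python) maps the descriptive categories to their point values; reporting a combined total is an illustrative convention added here and is not part of the published scale.

# Point values taken from Table 4.
SEVERITY = {
    "dry": 1,         # never drools
    "mild": 2,        # wet lips only
    "moderate": 3,    # wet lips and chin
    "severe": 4,      # clothing becomes damp
    "profuse": 5,     # clothing, hands, tray and objects become wet
}
FREQUENCY = {
    "never": 1,
    "occasionally": 2,
    "frequently": 3,
    "constantly": 4,
}

def drooling_score(severity, frequency):
    """Return (severity points, frequency points, combined total)."""
    s = SEVERITY[severity.lower()]
    f = FREQUENCY[frequency.lower()]
    return s, f, s + f

# Example: a patient with wet lips and chin who drools occasionally
print(drooling_score("Moderate", "Occasionally"))   # -> (3, 2, 5)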
Other methods of assessing salivary production and drooling
1) 1- 10 visual analogue scale (where 1 is best possible and 10 is worst possible situation)
2) Counting number of standard sized paper handkerchiefs used during the day
3) Measure saliva collected in cups strapped to chin
4) Inserting pieces of gauze of a known weight into the oral cavity for a specific period of time, then re-weighing them and calculating the difference between the dry and wet weights (see the sketch after this list).
5) Salivary duct cannulation 12 and measurement of saliva production.
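As a simple illustration of the gauze-weighing method (point 4 above), the sketch below (Python) converts the weight gained by the gauze into an approximate salivary flow rate, assuming a saliva density of roughly 1 g/ml (an approximation introduced here, not stated in the text).

def saliva_flow_ml_per_min(dry_weight_g, wet_weight_g, minutes_in_mouth):
    """Estimate salivary flow rate from the weight gained by the gauze."""
    absorbed_g = wet_weight_g - dry_weight_g
    return absorbed_g / minutes_in_mouth   # ~1 g of saliva per ml

# Example: gauze weighing 2.0 g dry and 5.6 g after 5 minutes in the mouth
print(round(saliva_flow_ml_per_min(2.0, 5.6, 5), 2))   # -> 0.72 ml/min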
Management
Drooling of saliva, a challenging condition, is better managed with a multidisciplinary team approach. The team includes the primary care physician, speech therapist, occupational therapist, dentist, orthodontist, otolaryngologist, paediatrician and neurologist. After initial assessment, a management plan can be made with the patient. The person and/or carer should understand that the goal of treating drooling is a reduction in excessive salivary flow while maintaining a moist and healthy oral cavity; avoidance of xerostomia (dry mouth) is important.
There are two main approaches
Non invasive modalities e.g. oral motor therapy, pharmacological therapy
Invasive modalities e.g. surgery and radiotherapy
No single approach is totally effective, and treatment is usually a combination of these techniques. The first step in the management of drooling is correction of reversible causes. Less invasive and reversible methods, namely oral motor therapy and medication, are usually implemented before surgery is undertaken5.
Non invasive modalities
Positioning: prior to the implementation of any therapy, it is essential to look at the position of the patient. When seated, a person should be fully supported and comfortable. Good posture with proper trunk and head control provides the basis for improving oral control of drooling and swallowing.
Eating and drinking skills: drooling can be exacerbated by poor eating skills. Special attention to, and the development of better techniques for, lip closure, tongue movement and swallowing may lead to some improvement. Acidic fruits and alcohol stimulate further saliva production, so avoiding them will help to control drooling10.
Oral facial facilitation: this technique helps to improve oral motor control, sensory awareness and frequency of swallowing. Scott and Staios et al18 noted improvement in drooling in patients with both hyper- and hypotonic muscles using this technique. It includes different approaches, normally undertaken by a speech therapist, which improve muscle tone and saliva control. Most studies show short-term benefit with little benefit in the long run. The technique can be practiced easily, has no side effects and can be stopped if no benefit is noted.
a) Icing – the effect usually lasts 5-30 minutes; improves tone and the swallow reflex.
b) Brushing – as the effect can last 20-30 minutes, it is suggested this is undertaken before meals.
c) Vibration – improves tone in high-tone muscles.
d) Manipulation – tapping, stroking, patting and firm pressure applied directly to the muscles with the fingertips are known to improve oral awareness.
e) Oral motor sensory exercises – include lip and tongue exercises.
Speech therapy: this should be started early to obtain good results. The goals are to improve jaw stability and closure, to increase tongue mobility, strength and positioning, to improve lip closure (especially during swallowing) and to decrease nasal regurgitation during swallowing.
Behaviour therapy - this uses a combination of cueing, overcorrection, and positive and negative reinforcement to help drooling. Desired behaviours, such as swallowing and mouth wiping, are encouraged, whereas an open mouth and thumb sucking are discouraged. Behaviour modification is useful to achieve: (1) increased awareness of the mouth and its functions; (2) increased frequency of swallowing; and (3) improved swallowing skills. This can be done by family members and friends. Although no randomised controlled trial has been done, over 17 articles published in the last 25 years show promising results and improved quality of life. The absence of reported side effects makes behavioural intervention a reasonable initial option compared with surgery, botulinum toxin or pharmacological management. Behavioural interventions are also useful before and after medical management such as botulinum toxin or surgery.
Oral prosthetic devices - a variety of prosthetic devices can be beneficial, e.g. chin cups and dental appliances, to achieve mandibular stability, better lip closure, tongue position and swallowing. Cooperation and comfort of the patient are essential for good results.
Pharmacological methods
A systematic review of anticholinergic drugs showed benztropine, glycopyrrolate and benzhexol hydrochloride to be effective in the treatment of drooling, but these drugs have adverse side effects and none has been identified as superior.
Hyoscine (scopolamine) - the effect of oral anticholinergic drugs in the treatment of drooling has been limited. Transdermal scopolamine (1.5 mg/2.5 cm2) offers advantages: it releases scopolamine through the skin into the bloodstream, and a single application is considered to provide a stable serum concentration for 3 days. Transdermal scopolamine has been shown to be very useful in the management of drooling, particularly in patients with neurological or neuropsychiatric disturbances or severe developmental disorders.
Glycopyrrolate - studies have shown 70-90% response rates, but with a high rate of side effects. Approximately 30-35% of patients choose to discontinue treatment due to unacceptable side effects such as excessive dry mouth, urinary retention, decreased sweating, skin flushing, irritability and behaviour changes. A study of 38 patients with drooling due to neurological deficits showed up to a 90% response rate. Mier et al21 reported glycopyrrolate to be effective in the control of excessive sialorrhoea in children with developmental disabilities. Approximately 20% of children given glycopyrrolate may experience substantial adverse effects, enough to require discontinuation of the medication.
Antimuscarinic drugs, such as benzhexol, have also been used, but their use is limited by troublesome side effects.
Antireflux medication - a study of antireflux medication (ranitidine and cisapride) in patients with gastro-oesophageal reflux due to oesophageal dysmotility and reduced lower oesophageal tone did not show any benefit21.
Modafinil - One case study noticed decreased drooling in two clients who were using the drug for other reasons, but no further studies have been done.
Alternative medications (papaya and grape seed extract) - mentioned in the literature as being used to dry secretions, but no research into their efficacy has been conducted.
Botulinum toxin - it was in 1822 that a German poet and physician, Justinus Kerner, discovered that patients who suffered from botulism complained of severe dryness of the mouth, which suggested that the toxin causing botulism could be used to treat hypersalivation. However, it is only in the past few years that botulinum toxin type A (BTx-A) has been used for this purpose. BTx-A binds selectively to cholinergic nerve terminals and rapidly attaches to acceptor molecules at the presynaptic nerve surface. This inhibits the release of acetylcholine from vesicles, resulting in reduced function of parasympathetically controlled exocrine glands. The blockade, though reversible, is temporary, as new nerve terminals sprout to create new neural connections. Studies have shown that injection of botulinum toxin into the parotid and submandibular glands successfully reduces the symptoms of drooling30,31. Although there is wide variation in recommended dosage, most studies suggest that about 30-40 units of BTx-A injected into the parotid and submandibular glands are enough for the symptoms to subside. The injection is usually given under ultrasound guidance to avoid damage to the underlying vasculature and nerves. The main side effects of this form of treatment are dysphagia due to diffusion into nearby bulbar muscles, weak mastication, parotid gland infection, damage to the facial nerve/artery and dental caries.
Patients with neurological disorders who received BTx-A injections showed a statistically significant effect at 1 month post-injection compared with controls, and this significance was maintained at 6 months. Intrasalivary gland BTx-A was shown to have a greater effect than scopolamine.
The effects of BTx-A are time limited and this varies between individuals.
Invasive modalities
Surgery can be performed to remove salivary glands (most surgical procedures focus on the parotid and submandibular glands), ligate or reroute salivary gland ducts, or interrupt the parasympathetic nerve supply to the glands. Wilke, a Canadian plastic surgeon, was the first to propose and carry out parotid duct relocation to the tonsillar fossae to manage drooling in patients with cerebral palsy. One of the best studied procedures, with a large number of patients and long-term follow-up data, is submandibular duct relocation32,33.
Intraductal laser photocoagulation of the bilateral parotid ducts has been developed as a less invasive means of surgical therapy. Early reports have shown some impressive results34.
Overall, surgery reduces salivary flow, and drooling can be significantly improved, often with immediate results: three studies noted that 80-89% of participants had an improvement in control of their saliva. Two studies discussed changes in quality of life. One of these found that 80% of participants improved across a number of different measures, including receiving affection from others and opportunities for communication and interaction. Most evidence regarding surgical outcomes of sialorrhoea management is of low quality and heterogeneous. Despite this, most patients experience a subjective improvement following surgical treatment36.
Radiotherapy - radiotherapy to the major salivary glands in doses of 6000 rad or more is effective. Side effects, which include xerostomia, mucositis, dental caries and osteoradionecrosis, may limit its use.
Key messages
Chronic drooling can be difficult to manage.
Early involvement of a multidisciplinary team is key.
A combination of approaches works best.
Always start with non-invasive, reversible and least destructive approaches.
Surgical and destructive methods should be reserved as a last resort.
Lumbar punctures are commonly performed by both medical and anaesthetic trainees, but in different contexts. Medically performed lumbar punctures are often used to confirm a diagnosis (meningitis, subarachnoid haemorrhage), whilst lumbar punctures performed by anaesthetists usually precede the injection of local anaesthetic into cerebrospinal fluid for spinal anaesthesia. The similarity lies in the fact that both involve the potential for introducing iatrogenic infection into the subarachnoid space. The incidence of iatrogenic infection is very low in both fields; a recent survey by the Royal College of Anaesthetists1 reported an incidence of 8/707 000, whilst there were only approximately 75 cases in the literature after ‘medical’ lumbar puncture.2 However, the consequences of iatrogenic infection can be devastating. It is likely that appropriate infection control measures taken during lumbar puncture would reduce the risk of bacterial contamination. The purpose of the present study is to compare infection control measures taken by anaesthetic and medical staff when performing lumbar puncture.
Method
A survey was constructed online (www.surveymonkey.com) and sent by email to 50 anaesthetic and 50 acute medical trainees in January 2011. All participants were on an anaesthetic or medical training programme and all responses were anonymous. The survey asked whether trainees routinely used the following components of an aseptic technique3 when performing lumbar puncture:
Sterile trolley
Decontaminate hands
Clean patient skin
Apron/gown
Dressing pack
Non-touch technique
Sterile gloves
No ethical approval was sought as the study was voluntary and anonymous.
Results
The overall response rate was 71% (40/50 anaesthetic trainees and 31/50 medical). All anaesthetic trainees routinely used the components of an aseptic technique when performing lumbar puncture. All medical trainees routinely cleaned the skin, decontaminated their hands and used a non-touch technique but only 80.6% used sterile gloves. 61.3% of medical trainees used a sterile trolley, 38.7% used an apron/gown and 77.4% used a dressing pack.
Discussion
This survey shows that adherence to infection control measures differs between anaesthetic and medical trainees when performing lumbar puncture. The anaesthetic trainees had a 100% compliance rate across all components of the aseptic technique, compared with an average of around 80% for the medical trainees. Both groups routinely cleaned the patient’s skin, decontaminated their hands and used a non-touch technique. However, there were significant differences in the use of other equipment, with fewer medical trainees using sterile gloves, trolleys, aprons and dressing packs.
Although the incidence of iatrogenic infection after lumbar puncture is low, it is important to contribute to this low incidence by adopting an aseptic technique. There may be differences with regards to the risks of iatrogenic infection between anaesthetic and medical trainees. Anaesthetic lumbar punctures involve the injection of a foreign substance (local anaesthesia) into the cerebrospinal fluid and may therefore carry a higher risk. Crucially however, both anaesthetic and medical lumbar punctures involve accessing the subarachnoid space with medical equipment and so the risk is present.
There are many reasons for the differing compliance rates between the two specialties. Firstly, anaesthetic trainees perform lumbar punctures in a dedicated anaesthetic room whilst the presence of ‘procedure/treatment rooms’ is not universal on medical wards. Secondly, anaesthetic trainees will always have a trained assistant present (usually an operating department practitioner, ODP) who can assist with preparing equipment such as dressing trolleys.
The mechanism of iatrogenic infection during lumbar puncture is not completely clear.4 The source of microbial contamination could be external (incomplete aseptic technique, infected equipment) or internal (bacteraemia in the patient); the fact that a common cause of iatrogenic meningitis is viridans streptococci5 (mouth commensals) supports the notion that external factors are relevant and an aseptic technique is important.
It is very likely that improved compliance amongst acute medical trainees would result from a dedicated treatment room on medical wards, but this is likely to involve financial and logistical barriers. The introduction of specific ‘lumbar puncture packs’, which include all necessary equipment (e.g. cleaning solution, aprons, sterile gloves) may reduce the risk of infection; the introduction of a specific pack containing equipment for central venous line insertion reduced colonisation rates from 31 to 12%.6 The presence of trained staff members to assist medical trainees when performing lumbar puncture may assist in improved compliance, similar to the role of an ODP for anaesthetic trainees.
The main limitation of this study is that the sample size is small. However, we feel that this study raises important questions as to why there is a difference in infection control measures taken by anaesthetic and medical trainees; it may be that the environment in which the procedure takes place is crucial and further work on the impact of ‘procedure rooms’ on medical wards is warranted.
Non-attendance in outpatient clinics accounts for a significant wastage of health service resources. Psychiatric clinics have high non-attendance rates, and failure to attend may be a sign of deteriorating mental health. Those who miss psychiatric follow-up outpatient appointments are more ill, with poorer social functioning, than those who attend (1). They have a greater chance of dropping out of clinic contact and of subsequent admission (1). Non-attendance and subsequent loss to follow-up indicate possible risk of harm to the patient or to others (2).
Prompts to encourage attendance at clinics are often used and may take the form of reminder letters (3), telephone prompting(4) and financial incentives (5). Issuing a copy of the referral letter to the appointee may prompt attendance for the initial appointment (6). Contacting patients by reminder letters prior to their appointments has been effective in improving attendance rates in a number of settings, including psychiatric outpatient clinics and community mental health centres (3).
Studies investigating the efficacy of prompting for improving attendance have generated contrasting findings and non-attendance remains common in clinical practice. We, therefore, carried out a naturalistic, prospective controlled study to investigate whether reminder letters would improve the rate of attendance in a community-based mental health outpatient clinic.
Design and Methods
The study was carried out at the Community Mental Health Centres based in Runcorn and Widnes in Cheshire, UK. The community mental health team (CMHT) provides specialist mental health services for adults of working age. Both CMHTs are similar in demographics and socio-economic need, and both have relatively high clinic non-attendance rates. In the week prior to the appointment, clerical staff from the community mental health team sent a standard letter to some patients reminding them of the date and time of the appointment and the name of the consulting doctor. They recorded whether each patient attended, failed to attend or cancelled the appointment, irrespective of whether a reminder letter had been received.
We compared the attendance rates of the experimental group (those who had received reminder letters) and the control group (those who had not) over a period of 18 months. Throughout the study period, the same medical team held the clinics and there were no major changes in the outpatient clinic setting, or in administrative or procedural arrangements, that might influence attendance. The Care Planning Approach (CPA) was implemented and in operation at both sites even before the introduction of reminder letters.
Attendance rates for all the clinics held during the study period were obtained from medical records. For all subjects who failed to attend, age and gender were obtained from the patient database. Patients whose appointments were cancelled were also included in the study.
Statistics and Data analysis
The data were analysed using SISA (Simple Interactive Statistical Analysis) (7). Chi-squared tests were used to compare attendance rates between the groups, for both new patients and follow-ups, with the P value for statistical significance set at 0.05. Odds ratios were calculated to measure the size of the effect. In addition, we examined how age and gender may have influenced the effect of the text-based prompting on attendance.
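For illustration, the headline comparison reported below in Table 1 can be reproduced from the raw counts given in the Results (585 attended versus 228 failed to attend in the experimental group; 344 versus 211 in the control group). The following sketch uses Python with the scipy library rather than SISA, which the study itself used, and applies the chi-squared test without Yates' continuity correction.

```python
# Minimal sketch (Python/scipy, not the SISA package used in the study) reproducing the
# overall attendance comparison from the counts reported in the Results section.
from scipy.stats import chi2_contingency

# Rows: experimental (reminder letter) vs control; columns: attended vs failed to attend.
table = [[585, 228],
         [344, 211]]

# correction=False -> no Yates' continuity correction, matching the reported chi2 of 15.05.
chi2, p, dof, _ = chi2_contingency(table, correction=False)

# Odds ratio for attending in the experimental group relative to the control group.
odds_ratio = (585 / 228) / (344 / 211)

print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")  # approx. 15.05, 1, 0.0001
print(f"OR = {odds_ratio:.2f}")                       # approx. 1.57
```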
Results
In the experimental group a total of 114 clinics were booked, with clinic lists totalling 843 patients. Of these, 88 were new referrals and 755 were follow-up appointments. 65 of 114 clinics had full attendance. A total of 228 patients failed to attend the clinic. Of those who failed to attend, 25 patients were new referrals and 203 were follow-up patients. 28 follow up patients and 2 patients newly referred to the team called to cancel their appointments.
In the control group, a total of 71 clinics were booked amounting to a total of 623 patients. Of these, 86 were new referrals and 537 were for follow-up patients. Only 25 out of 71 clinics had full attendance. A total of 211 patients failed to attend. Of those who failed to attend, 32 were new referrals and 179 were follow-up patients. 55 follow up patients and 13 patients newly referred to the team called to cancel their appointments.
Of those who failed to attend in the experimental group, 98 (43%) were women. The mean age of non-attendees was 38 years, with a range of 18-76 years. Of those who failed to attend in the control group, 110 (52%) were women. The mean age of non-attendees was 32 years, with a range of 19-70 years.
In our study, failure to attend was not distributed evenly but had seasonal peaks at Christmas and during the summer vacation period.
The outcome from prompting in the experimental group is compared with the control group and displayed in Table 1.
Outcomes | Control group (n) | Experimental group (n) | χ2 (df) | P | OR (CI)
No. of clinics with full attendance | 25 | 65 | 8.32 | 0.0039 | 2.44 (1.32-4.50)
Total no. of patients attended | 344 | 585 | 15.05 | 0.0001 | 1.57 (1.25-1.98)
No. of new patients attended | 41 | 61 | 3.743 | 0.053 | 1.9 (0.98-3.67)
No. of follow-up patients attended | 303 | 524 | 11.39 | 0.0007 | 1.52 (1.19-1.94)
No. of cancellations | 68 | 30 | 38.63 | 0 | 3.85 (2.46-6.04)
χ2 = chi-square, df = degrees of freedom, OR = odds ratio, CI = confidence interval
The attendance rate in the experimental group was 71.95% (585/813) as opposed to 61.98% (344/555) in the control group (OR=1.57; p=0.0001).
The attendance rate for new patients in the experimental group was 70.9% (61/86) as opposed to 56.16% (41/73) in the control group (OR=1.9; p=0.053).
The attendance rate for follow-up patients in the experimental group was 72.0% (524/727) and 62.8% (303/482) in the control group (OR=1.52; p=0.0007).
In addition, significantly more clinics had full attendance in the experimental group (57.0% versus 35.2%, a difference of about 22 percentage points; OR=2.44, P=0.0039).
The observed difference was not influenced by patient’s age or gender.
Discussion
The results from this study confirm previous findings that reminder letters within a week before the appointment can improve attendance rates in community mental health clinics. Our results are similar to those of the Cochrane systematic review, which has suggested that a simple prompt in the days just before the appointment could indeed encourage attendance (8).
Although it has been reported elsewhere(8) that text based prompting increases the rate at which patients keep their initial appointments, our study did not show a similar result for new patients.
It has already been demonstrated that new patients and follow-up patients in psychiatric clinics are distinct groups, with different diagnostic profiles, degrees of mental illness and reasons for non-attendance. Follow-up patients are more severely ill, socially impaired and isolated than new patients (1). Forgetting the appointment and being too unwell are the most common reasons given for non-attendance by follow-up patients, while being unhappy with the referral, clinical error and being too unwell are the most common reasons in the new patient group (1). In addition, it has been observed that an increased rate at which patients keep their first appointments is more likely related to factors other than simple prompting (4). This may explain our finding that prompting was more beneficial for follow-up patients than for new referrals to the Community Mental Health Team.
We also identified several patients with severe mental illness who ‘did not attend’ for three successive outpatient appointments. Their care plans were reviewed and arrangements made to follow up with their community psychiatric nurses as domiciliary visits at regular intervals. Such measures should reduce duplication of the services and shorten the waiting times for psychiatric consultation, which are well-recognised factors associated with non-attendance (9).
Non-attendance is an index of severity of mental illness and a predictor of risk (1). In addition to reminder letters, telephone prompts are also known to improve attendance (4). Successful interventions to improve attendance may be labour intensive, but they can be automated and, ultimately, prove cost effective (8).
We noticed that there is limited research and lack of quality randomised controlled trials in the area of non-attendance and the effectiveness of intervention to improve attendance in mental health setting. More large, well-designed randomised studies are desirable. We also recommend periodic evaluation of outpatient non-attendance in order to identify high-risk individuals and implement suitable measures to keep such severely mentally ill patients engaged with the services.
There was no randomisation in this study and we relied on medical records. We have not directly compared the characteristics of non-attendees with those patients who did attend the clinics. We did not evaluate other clinical and socio-demographic factors (e.g. travelling distance, financial circumstances, etc) that are known to influence the attendance rates in mental health setting. Hence, there may be limitations in generalising the results beyond similar populations with similar models of service provision.
The advent of benzodiazepines in the late fifties was met with great excitement by the practicing physicians around the world. Their range of actions – sedative/hypnotic, anxiolytic, anticonvulsant and muscle relaxant – combined with low toxicity and alleged lack of dependence potential seemed to make them ideal medications for many common conditions. The drugs were prescribed long term, often for many years, for complaints such as anxiety, depression, insomnia and ordinary life stressors. They began to replace barbiturates; drugs known to be dangerous in overdose, which tended to cause addiction and were associated with troublesome side-effects. Previous compounds including opium, alcohol, chloral and bromides were similarly burdened.
The first benzodiazepine, chlordiazepoxide (Librium), was synthesized in 1955 by Leo Sternbach while working at Hoffmann–La Roche on the development of tranquilizers. The compound showed very strong sedative, anticonvulsant and muscle relaxant effects when submitted for a standard battery of animal tests. These impressive clinical findings led to its speedy introduction throughout the world in 1960 under the brand name Librium. Following chlordiazepoxide, diazepam was marketed by Hoffmann–La Roche under the brand name Valium in 1963.
The benefits of benzodiazepines and the apparent lack of discouraging factors led to an alarming rise in benzodiazepine prescriptions. In the late 1970s benzodiazepines became the most commonly prescribed of all drugs in the world.1 In 1980, Tyrer reported that each day about 40 billion doses of benzodiazepine drugs were consumed throughout the world.3 This figure is staggering by any standards. However, towards the end of the 1970s, awareness began to grow that benzodiazepines were being unnecessarily over-prescribed, and it was noticed that certain patients might become dependent on benzodiazepines after chronic use.4 In particular, patients found it difficult to stop taking benzodiazepines because of withdrawal reactions and many complained that they had become ‘addicted’. Several investigations showed quite unequivocally that benzodiazepines could produce pharmacological dependence at therapeutic dosage.5-9
In 1988, the Committee on Safety of Medicines responded to these concerns by spelling out emphatic guidelines on the use of benzodiazepine drugs. For anxiety and insomnia, benzodiazepines are indicated for short-term relief (two to four weeks) only if the condition is severe, disabling and subjecting the individual to extreme distress.10
Tolerance and dependence
Tolerance is a phenomenon that develops with many chronically used drugs. The body responds to the continued presence of the drug with a series of adjustments that tend to overcome the drug effects. In the case of benzodiazepines, compensatory changes occur in the GABA and benzodiazepine receptors which become less responsive, so that the inhibitory actions of the GABA and benzodiazepines are decreased. As a result, the original dose of the drug has progressively less effect and a higher dose is required to obtain the original effect.
Dependence is understood to be the inability to control intake of a substance to which one is addicted. It encompasses a range of features initially described in connection with alcohol abuse, now recognised as a syndrome (see box 1) associated with a range of substances.
Dependence has two components: psychological dependence, which is the subjective feeling of loss of control, cravings and preoccupation with obtaining the substance; and physiological dependence, which is the physical consequences of withdrawal and is specific to each drug. For some drugs (e.g. alcohol) both psychological and physiological dependence occur; for others (e.g. LSD) there are no marked features of physiological dependence.
Box 1: Dependence Syndrome*
Three or more of the following manifestations should have occurred together for at least one month or if persisting for periods of less than one month then they have occurred together repeatedly within a twelve month period.
A strong desire or sense of compulsion to take the substance.
Impaired capacity to control substance-taking behaviour in terms of onset, termination or level of use, as evidenced by: the substance being often taken in larger amounts or over a longer period than intended, or any unsuccessful effort or persistent desire to cut down or control substance use.
A physiological withdrawal state (see F1x.3 and F1x.4) when substance use is reduced or ceased, as evidenced by the characteristic withdrawal syndrome for the substance, or use of the same (or closely related) substance with the intention of relieving or avoiding withdrawal symptoms.
Evidence of tolerance to the effects of the substance, such that there is a need for markedly increased amounts of the substance to achieve intoxication or desired effect, or that there is a markedly diminished effect with continued use of the same amount of the substance.
Preoccupation with substance use, as manifested by: important alternative pleasures or interests being given up or reduced because of substance use; or a great deal of time being spent in activities necessary to obtain the substance, take the substance, or recover from its effects.
Persisting with substance use despite clear evidence of harmful consequences, as evidenced by continued use when the person was actually aware of, or could be expected to have been aware of the nature and extent of harm.
* ICD-10 Classification of Mental and Behavioural Disorders, online version 2007.
Withdrawal syndrome and discontinuation syndrome
Any drug consumed regularly and heavily can be associated with withdrawal phenomena on stopping. Clinically significant withdrawal phenomena occur in dependence on alcohol, benzodiazepines and opiates, and are occasionally seen in cannabis, cocaine and amphetamine use. In general, drugs with a short half-life will give rise to more rapid but more transient withdrawal.
Discontinuation syndrome is a common phenomenon and occurs with all classes of antidepressants. It is only experienced when one tries to discontinue its use. The most common symptoms are dizziness, vertigo, gait instability, nausea, fatigue, headaches, anxiety and insomnia. Less commonly shock-like sensations, paraesthesia, visual disturbances, diarrhoea and flu-like symptoms have been reported. Symptoms usually begin 2-5 days after SSRI discontinuation or dose reduction. The duration is variable (one to several weeks) and ranges from mild to moderate intensity in most patients, to extremely distressing in a small number. Tapering antidepressants at the end of treatment, rather than abrupt stoppage, is recommended as standard practice by several authorities and treatment guidelines11-13.
The terms ‘antidepressant withdrawal syndrome’ and ‘antidepressant discontinuation syndrome’ are used interchangeably in the literature. ‘Discontinuation’ is preferred as it does not imply that antidepressants are addictive or cause a dependence syndrome. The occurrence of withdrawal symptoms does not in itself indicate that a drug causes dependence as defined in ICD 10 (World Health Organisation 1992)14 and DSM –IV (American Psychiatric Association, 1994)15.
Understanding how benzodiazepines work and their effects
For the first 15 years after the introduction of benzodiazepines, no clear picture emerged as to how these drugs might exert their psychotropic effects. The great breakthrough in our understanding of the mechanism of action of benzodiazepines came in the mid-1970s, when biologists at Hoffmann–La Roche demonstrated that benzodiazepines exert their psychotropic effects by potentiating GABA neurotransmission.16
GABA (gamma-aminobutyric acid) is the most important inhibitory neurotransmitter in the mammalian brain, accounting for about 30% of all synapses in the brain. GABAergic neurones mediate pre-synaptic inhibition by depressing the release of neurotransmitter at excitatory input synapses, and post-synaptic inhibition by depressing synaptic excitation of the principal neurone. When benzodiazepines react at their receptor site, which is situated on the GABA receptor, the combination acts as a booster to the actions of GABA, making the neurone more resistant to excitation. Several studies showed that benzodiazepines were able to facilitate both types of inhibition, indicating that the effects of the benzodiazepines were in fact due to an interaction with the GABAergic transmission process17-19.
Various subtypes of benzodiazepine receptors have slightly different actions. Alpha 1 is responsible for sedative effects, Alpha 2 exerts anxiolytic effects, and Alpha 1, Alpha 2 and Alpha 5 are responsible for anticonvulsant effects. As a consequence of the enhancement of GABA’s inhibitory activity caused by benzodiazepines, the brain’s output of excitatory neurotransmitters, including norepinephrine, serotonin, dopamine and acetylcholine, is reduced.
The studies on the receptor binding of benzodiazepines and the subsequent changes that occur in the central nervous system have provided us with an adequate explanation for some or all of the actions of benzodiazepines, which are listed in Box 2.
Box 2: Four principle biological properties of benzodiazepines
Anxiolytic and behavioural inhibition – The anxiolytic effect is seen in animals as an increase of those behavioural responses that are suppressed experimentally by punishment or which are absent because of innate aversion20-23.
Anticonvulsant – Benzodiazepines are most potent against chemically induced epileptiform activities. At higher doses most, but not all, benzodiazepines also prevent seizures induced by electric shock24.
Sedative/hypnotic – These effects of benzodiazepines are most easily observed as a decrease of spontaneous locomotor activity in rodents placed in an observation chamber. Benzodiazepines will shorten sleep latency (amount of time taken to fall asleep after the lights have been switched off) which can be demonstrated by electroencephalogram25.
Muscle relaxant - Common tests on rodents show that benzodiazepines impair performance in motor tasks, for example the rodent’s ability to balance on a rotating drum. The cat shows marked ataxia after relatively low doses25.
What are benzodiazepines used for?
Sleep disorders
The benzodiazepines are used widely in the treatment of sleep disorders and many have been developed and licensed for this purpose. They are mainly known as hypnotic drugs (sleeping pills) because insomnia is the main target use. Certain factors are important in determining the choice of the hypnotic drug. Ideally, the hypnotic should be effective at inducing sleep in the individual, and should enhance objective and subjective elements of sleep. It should have a fast onset with minimal side effects and the absence of withdrawal symptoms.
The early benzodiazepine hypnotics were drugs such as nitrazepam and flurazepam. After their introduction, it was found that they had half-lives of more than a day, and individuals suffered undesirable effects such as sedation, ataxia or amnesia during the day. This was problematic, especially for those individuals who needed to drive or operate machinery. Another consequence was falls, with subsequent hip fractures, in the elderly population because, owing to slower metabolism, they accumulated raised plasma levels of the drug. For these reasons, benzodiazepines with shorter half-lives were developed so that plasma levels fall below the functional threshold concentration by the next morning.
The first of the shorter half-life benzodiazepine hypnotics to be introduced were temazepam and triazolam. Temazepam has a half-life of 5 hours and is commonly used in primary, secondary and tertiary settings for insomnia. A possible drawback of very short half-life hypnotics is rebound insomnia. This is a state of worsening sleep which commonly follows discontinuation of a regularly used hypnotic.
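As a rough numerical illustration of this point, plasma levels under simple first-order (single-compartment) elimination fall to 0.5^(t/t1/2) of their peak after time t. The sketch below is illustrative only: the half-lives used are midpoints of the ranges given in Table 1, and the 8-hour ‘overnight’ interval is an assumption rather than a clinical figure.

```python
# Illustrative sketch: residual plasma level the morning after dosing, assuming simple
# first-order (single-compartment) elimination. Half-lives are midpoints of the ranges
# in Table 1; the 8-hour interval is an illustrative assumption, not a clinical figure.
def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of the peak plasma level remaining after the given time."""
    return 0.5 ** (hours_elapsed / half_life_hours)

OVERNIGHT_HOURS = 8  # assumed time from dose to waking

for drug, half_life in [("temazepam", 6.5), ("nitrazepam", 32.0)]:
    remaining = fraction_remaining(OVERNIGHT_HOURS, half_life)
    print(f"{drug} (t1/2 ~{half_life} h): ~{remaining:.0%} of peak level remains after {OVERNIGHT_HOURS} h")
```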
An important point to note is that although the subjective efficacies of benzodiazepines are widely reported, the use of polysomnography (a sleep study that involves recording a variety of physiological measures including electroencephalograph, electro-oculogram and electromyogram) has shown that sleep architecture in individuals with insomnia is not normalised by benzodiazepines. The increase in sleep duration can be accounted for by an increase in the time spent in stage 2 of sleep, while the amount of time spent in slow-wave sleep (deep) and REM (rapid eye movement) is actually decreased26.
Anxiety disorders
It can be argued that the benzodiazepines are probably the most efficacious and best tolerated pharmacological treatments of anxiety. Numerous studies, many of them conducted under stringent double-blind conditions, have consistently shown that benzodiazepines produce significantly more improvement than placebo in both somatic and emotional manifestations of anxiety27-29.
Before the introduction of benzodiazepines, anxiety disorders were treated either with the barbiturates or with related drugs such as meprobamate and glutethimide. These agents were highly likely to be abused and led to a great deal of dependence. Moreover, they were toxic in overdose and fatalities were high in populations using them. The improved efficacy and safety profile of benzodiazepines, aided by intense campaigns to restrict use of barbiturate-type drugs, meant that they rapidly became the first-choice drugs for anxiety within a few years of their introduction.
Much clinical practice and opinion suggests that benzodiazepines can be used as first-line treatment for acute anxiety episodes as long as CSM guidelines are adhered to. For more intractable conditions such as established social phobia, generalised anxiety disorder and panic disorder, they should probably be reserved as adjunctive or second-line agents.
In contrast to the treatment of sleep disorders, it is important to achieve a constant level of receptor occupation to maintain anxiolysis throughout the day. So for anxiety, compounds with longer elimination half-lives are preferred, whereas for sleep induction, short half-life drugs are favoured. The principal benzodiazepines used as anxiolytics include diazepam, chlordiazepoxide, clonazepam, lorazepam, alprazolam and oxazepam.
The use of benzodiazepines as first-line agents for anxiety has been on the decline since the 1990s. There are changing cultural and medical attitudes to the prescription of drugs for the treatment of anxiety disorders as a result of growing evidence that psychological approaches are also effective. The risks of dependence and withdrawal difficulties are problematic in a significant number of patients. Another issue is the abuse of benzodiazepines by drug addicts and diversion of legitimate supplies on to the black market. There is competition from other agents (buspirone, tricyclic antidepressants, monoamine oxidase inhibitors and selective serotonin reuptake inhibitors) which have a different side-effect profile and are free from dependence/withdrawal problems.
Seizure Disorders
The anti-convulsant effects of benzodiazepines find their greatest clinical use in the acute control of seizures. Diazepam, clonazepam and lorazepam have all been used in the treatment of status epilepticus.
Status epilepticus is a life-threatening condition in which the brain is in a state of continuous seizure activity which can result in impaired respiration, hypoxic brain damage and brain scarring. It is a medical emergency that requires quick and effective intervention.
Diazepam was reported to be effective for the treatment of status epilepticus in the mid-1960s30-32 and is still widely considered to be the drug of choice for the initial control of seizures. Given intravenously, diazepam has a rapid onset of clinical activity, achieving cessation of the seizure within 5 minutes of injection in 80% of the patients in one study. Where facilities for resuscitation are not immediately available, diazepam can be administered as a rectal solution.
Although intravenous diazepam is effective for status epilepticus, it is associated with a high risk of thrombophlebitis, which is why the BNF suggests use of intravenous lorazepam. Lorazepam is also highly active. Its onset of action is rapid, but because of its slower rate of tissue distribution its anticonvulsant activity is prolonged compared with diazepam35,36.
Gastaut et al (1971) showed that clonazepam was an even more potent anticonvulsant than diazepam in the treatment of status epilepticus. It can be administered via the buccal mucosa (an advantage in children) and can also be given as a suppository.
Benzodiazepines are undoubtedly potent anticonvulsants on acute administration, but their use in the long-term treatment of epilepsy is limited by the development of tolerance to the anticonvulsant effects and by side effects such as sedation and psychomotor slowing39. They are usually considered an adjunct to standard drugs where these have failed to give acceptable control.
Table 1: Pharmacokinetic profile of common benzodiazepines and their licensed indications
Long-acting | Tmax (hrs) | T1/2 (hrs) | Licensed indications11
Chlordiazepoxide42 | 2 | 7-14 | Short-term use in anxiety, adjunct to acute alcohol withdrawal
Diazepam42 | 0.5-2 | 32-47 | Short-term use in anxiety, adjunct to acute alcohol withdrawal, insomnia, status epilepticus, muscle spasm, peri-operative use
Clonazepam43 | 2.5 | 23.5 | All forms of epilepsy, myoclonus, status epilepticus
Intermediate-acting | | |
Temazepam42 | 1 | 5-8 | Insomnia; peri-operative use
Nitrazepam, Flurazepam*42 | 1-3 | 16-48 | Short-term use for insomnia
Loprazolam, Lormetazepam42 | 1-3 | 8-10 | Short-term use for insomnia
Short-actingα | | |
Lorazepam42 | 1-1.5 | 10-20 | Short-term use in anxiety or insomnia; status epilepticus; peri-operative use
Oxazepam42 | 2.2-3 | 5-15 | Short-term use in anxiety
Midazolam43 | 0.6 | 2.4 | Sedation with amnesia, sedation in intensive care, induction of anaesthesia
Alprazolam42 | 1.2-1.7 | 10-12 | Short-term use in anxiety
Tmax: time to peak plasma concentration; T1/2: half-life
* Nitrazepam and flurazepam have prolonged action and may give rise to residual effects on the following day. Temazepam, loprazolam and lormetazepam act for a shorter time and have little or no hangover effect.
α Short-acting compounds are preferred in hepatic impairment but carry a greater risk of withdrawal symptoms.
Other uses
Alcohol detoxification - Benzodiazepines have become the standard pharmacological treatment for alcohol withdrawal. In acute alcohol detoxification, long-acting benzodiazepines such as diazepam or chlordiazepoxide are more appropriate than shorter-acting agents like lorazepam or temazepam. The two principal reasons for this are: 1) the former drugs provide stable plasma concentrations over several hours, which is necessary to maintain control over central nervous system excitability; and 2) there is a higher risk of addiction with short-acting drugs in this patient population.
In alcohol-dependent patients with hepatic impairment, oxazepam or lorazepam is more suitable as these drugs are not eliminated by hepatic oxidation through the cytochrome P450 system. Cytochrome P450 (CYP) is a collective term for a superfamily of membrane-bound haem-thiolate proteins of critical importance in the oxidative and reductive metabolism of both endogenous and foreign compounds. CYPs are the major enzymes in drug metabolism, accounting for about 75% of total drug metabolism. Many of the CYPs in humans are found in the liver and the gastrointestinal tract. After acute detoxification is over, many patients enter rehabilitation programmes aimed at maintaining abstinence in the community. There is no evidence that use of benzodiazepines is useful in reducing alcohol craving or facilitating abstinence.
Anaesthesia - The psychotropic effects of benzodiazepines make them appropriate for use as anaesthetic agents or as adjuncts to anaesthesia. Muscle relaxation, sedation and anterograde amnesia are sought-after properties in anaesthetic agents. Midazolam is used as a sedative agent in patients undergoing minor invasive procedures that are considered traumatic, such as dental treatment or endoscopy.41
Muscle relaxants – The muscle relaxant properties of benzodiazepines are an indication for their use in some neurological disturbances for symptomatic relief of muscle spasms and spasticity.
Assessment and management of patients with chronic benzodiazepine dependence
Because of the adverse effects, lack of efficacy and socioeconomic costs of continued benzodiazepine use, long-term users have for many years been advised to withdraw if possible or at least to reduce dosage.10,44 Echoing the CSM advice, the Mental Health National Service Framework (NSF), which was published in 1999, recommended that benzodiazepines should be used for no more than two to four weeks for severe and disabling anxiety. The Mental Health NSF called upon health authorities to implement systems for monitoring and reviewing prescribing of benzodiazepines within local clinical audit programmes. Primary Care Trusts (PCTs) should ensure that this recommendation is still being implemented45.
In primary care, early detection and intervention are the main principles of assessment. The initial assessment should
· Establish the pattern of benzodiazepine usage: onset, duration, which benzodiazepine/s, dosage history, current regime and any periods of abstinence.
· Check for evidence of benzodiazepine dependence (see box 3).
· If benzodiazepine dependence is present, determine the type of benzodiazepine.
· Detail any history of previous severe withdrawal (including history of seizures).
· Establish the level of motivation to change.
Dependence on benzodiazepines often indicates psychosocial problems in a person. Benzodiazepines are increasingly used in conjunction with other substances of abuse, to enhance the effects obtained from opiates and to alleviate withdrawal symptoms of other drugs of abuse such as cocaine, amphetamines or alcohol. The patient needs an individualised and comprehensive assessment of their physical and mental health needs and of any co-morbid use of other drugs and alcohol. Stable psychological health and personal circumstances are desirable features for successful withdrawal from benzodiazepines. Certain patients will be unsuitable for withdrawal, e.g. those experiencing a current crisis or having an illness for which the drug is required at the current time. Referral to specialist teams may be appropriate for some, e.g. if the patient is also dependent on other drugs or alcohol, if there is co-existing physical or psychiatric morbidity or if there is a history of drug withdrawal seizures. In some circumstances, it may be more appropriate to wait until other problems are resolved or improved.
This list is probably not exhaustive. Not all patients experience all of the symptoms, and different individuals experience different combinations of symptoms.
Management of benzodiazepine withdrawal
Withdrawal of the benzodiazepine drug can be managed in primary care if the patients concerned are willing, committed and compliant. Clinicians should seek opportunities to explore the possibility of benzodiazepine withdrawal with patients on long-term prescriptions. Interested patients could benefit from a separate appointment to discuss the risks and benefits of short- and long-term benzodiazepine treatment47. Information about benzodiazepines and withdrawal schedules could be offered in printed form. One simple intervention that has been shown to be effective in reducing benzodiazepine use in long-term users is the sending of a GP letter to targeted patients. The letter discussed the problems associated with long-term benzodiazepine use and invited patients to try to reduce their use and eventually stop. Adequate social support, the ability to attend regular reviews and no previous history of complicated drug withdrawal are desirable for successful benzodiazepine withdrawal.
Switching to diazepam: This is recommended for some people commencing a withdrawal schedule. Diazepam is preferred because it possesses a long half-life, thus avoiding sharp fluctuations in plasma level. It is also available in various strengths and formulations, which facilitates stepwise dose substitution from other benzodiazepines and allows small incremental reductions in dosage. The National Health Service Clinical Knowledge Summaries recommend switching to diazepam for people using short-acting benzodiazepines such as alprazolam and lorazepam, for preparations that do not allow for small reductions in dose (that is, alprazolam, flurazepam, loprazolam and lormetazepam) and for some complex patients who may experience difficulty withdrawing directly from temazepam and nitrazepam due to a high degree of dependence. See Table 2 for approximate dose conversions of benzodiazepines when switching to diazepam.
Gradual dosage reduction: It is generally recommended that the dosage should be tapered gradually in long-term benzodiazepine users, for example a 5-10% reduction every 1-2 weeks1,49. Abrupt withdrawal, especially from high doses, can precipitate convulsions, acute psychotic or confusional states and panic reactions. As mentioned earlier, benzodiazepines’ enhancement of GABA’s inhibitory activity reduces the brain’s output of excitatory neurotransmitters such as norepinephrine, serotonin, dopamine and acetylcholine. Abrupt withdrawal of benzodiazepines may therefore be accompanied by uncontrolled release of dopamine, serotonin and other neurotransmitters, which is linked to hallucinatory experiences similar to those in psychotic disorders46.
The rate of withdrawal should be tailored to the patient's individual needs and should take into account such factors as lifestyle, personality, environmental stressors, reasons for taking benzodiazepines and the amount of support available. Various authors suggest optimal withdrawal durations of between 6-8 weeks and a few months, but some patients may take a year or more50. A personalised approach, empowering the patient by letting them guide their own reduction rate, is likely to result in better outcomes.
Table 2: Approximate equivalent doses of benzodiazepines1
Benzodiazepine | Approximate equivalent dosage (mg)a
Alprazolam | 0.5
Chlordiazepoxide | 25
Clonazepam | 0.5
Diazepam | 10
Flunitrazepam | 1
Flurazepam | 15-30
Loprazolam | 1
Lorazepam | 1
Lormetazepam | 1
Nitrazepam | 10
Oxazepam | 20
Temazepam | 20
a Clinical potency for hypnotic or anxiolytic effects may vary between individuals; equivalent doses are approximate.
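The sketch below is illustrative only and is not prescribing guidance: it uses the approximate equivalences in Table 2 to convert a current dose to its diazepam equivalent and then prints a nominal taper of about 10% of the current dose every two weeks, which is one reading of the gradual reduction described above. The function names, the 10% step and the 0.5 mg stopping point are assumptions for illustration; in practice the rate is individualised and doses are rounded to available tablet strengths.

```python
# Illustrative only, not prescribing guidance. Converts a benzodiazepine dose to an
# approximate diazepam equivalent (Table 2) and prints a nominal taper of ~10% of the
# current dose every two weeks. Step size and 0.5 mg floor are illustrative assumptions.
DIAZEPAM_10MG_EQUIVALENT = {   # mg of each drug roughly equivalent to 10 mg diazepam (Table 2)
    "alprazolam": 0.5, "chlordiazepoxide": 25, "clonazepam": 0.5, "diazepam": 10,
    "flunitrazepam": 1, "loprazolam": 1, "lorazepam": 1, "lormetazepam": 1,
    "nitrazepam": 10, "oxazepam": 20, "temazepam": 20,
}

def diazepam_equivalent(drug: str, dose_mg: float) -> float:
    """Approximate diazepam-equivalent dose in mg, using Table 2."""
    return dose_mg * 10 / DIAZEPAM_10MG_EQUIVALENT[drug.lower()]

def taper_schedule(start_mg: float, step_fraction: float = 0.10, floor_mg: float = 0.5):
    """Yield successive doses, each reduced by a fraction of the current dose."""
    dose = start_mg
    while dose > floor_mg:
        yield round(dose, 1)
        dose -= dose * step_fraction
    yield 0.0

if __name__ == "__main__":
    start = diazepam_equivalent("lorazepam", 2)  # 2 mg lorazepam ~ 20 mg diazepam
    for step, dose in enumerate(taper_schedule(start)):
        print(f"weeks {2 * step}-{2 * step + 1}: {dose} mg diazepam daily")
```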
Patients may develop numerous symptoms of anxiety despite careful dose reductions. Simple reassurance and encouragement should suffice in most cases; however, in the minority who experience significant distress, formal psychological support should be available. Cognitive therapy, behavioural approaches including relaxation techniques and breathing exercises for anxiety management, as well as other therapies such as massage and yoga, may alleviate difficulties during withdrawal. Psychoeducation around withdrawal symptoms should be offered, and referral to a support organisation or group is helpful.
Although prescriptions of benzodiazepines have declined substantially since 1988, there is an ongoing challenge within all sectors of the NHS to prevent benzodiazepine dependence. This can be achieved by adhering to official recommendations to limit prescriptions to 2-4 weeks, or for brief courses or occasional usage. All health authorities should have clinical audit programmes reviewing and monitoring prescribing rates for benzodiazepines. Through this, increased awareness of CSM guidelines amongst all health care professionals should aid in more appropriate prescriptions and subsequent monitoring that is required to prevent unnecessary prescriptions. Patients on long-term prescriptions should be offered the opportunity for controlled withdrawal and the relevant psychological and social support.
Hepatitis B (HB) is a serious global public health problem. About 2 billion people worldwide (according to the latest WHO figures) have been infected with the hepatitis B virus (HBV). Interestingly, rates of new infection and acute disease are highest among adults, but chronic infection is more likely to occur in persons infected as infants or young children, leading to cirrhosis and hepatocellular carcinoma in later life. More than 350 million persons are reported to have chronic infection globally at present1,2. These chronically infected people are at high risk of death from cirrhosis and liver cancer; the virus kills about 1 million persons each year. For a newborn infant whose mother is positive for both HB surface antigen (HBsAg) and HB e antigen (HBeAg), the risk of chronic HBV infection is 70-90% by the age of 6 months in the absence of post-exposure immunoprophylaxis3.
HB vaccination is the only effective measure to prevent HBV infection and its consequences. Since its introduction in 1982, recommendations for HB vaccination have evolved into a comprehensive strategy to eliminate HBV transmission globally4. In the United States during 1990–2004, the overall incidence of reported acute HB declined by 75%, from 8.5 to 2.1 per 100,000 population. The most dramatic decline occurred in children and adolescents. Incidence among children aged <12 years and adolescents aged 12-19 years declined by 94% from 1.1 to 0.36 and 6.1 to 2.8 per 100,000 population, respectively2,5.
Populations of countries with intermediate and high endemicity are at high risk of acquiring HB infection. Pakistan lies in an intermediate endemic region, with a prevalence of 3-4% in the general population6. WHO has included the HB vaccine in the Expanded Programme on Immunisation (EPI) globally since 1997; Pakistan included the HB vaccine in the EPI in 2004. Primary vaccination consists of 3 intramuscular doses of the HB vaccine. Studies show seroprotection rates of 95% with the standard immunisation schedule at 0, 1 and 6 months using a single-antigen HB vaccine among infants and children7,8. Almost similar results have been reported with immunisation schedules giving HB injections (either single-antigen or in combination vaccines) at 6, 10 and 14 weeks along with other vaccines in the EPI schedule. However, various factors such as age, gender, genetics and socio-environmental influences are likely to affect seroprotection rates9. There is therefore a need to know the actual seroprotection rates in our population, where different vaccines (EPI-procured and privately procured) incorporated into different schedules are used. This study was conducted to determine the real status of seroprotection against HB in our children. The results will help future policy-making, highlight our shortcomings, allow comparison of our programme with international standards and augment confidence in vaccination programmes.
Materials And Methods
This study was conducted at the vaccination centres and paediatric OPDs (outpatient departments) of CMH and MH, Rawalpindi, Pakistan. Children reporting for measles vaccination at the vaccination centres at 9 months of age were included. Their vaccination cards were examined to ensure that they had received 3 doses of HB vaccine according to the EPI schedule, duly endorsed on their cards. These were mainly children of soldiers, but also included some civilian children invited for EPI vaccination at the MH vaccination centre. Children of officers were similarly included from the CMH vaccination centre, and their vaccination records were confirmed by examining their vaccination cards. Some civilian children who had received private HB vaccination were included from the paediatric OPDs. Some children older than 9 months and less than 2 years of age, who presented with non-febrile minor illnesses to the paediatric OPDs at CMH and MH, were also included, and their vaccination status was confirmed by examining their vaccination cards.
Inclusion Criteria
1) Male and female children >9 months and <2 years of age.
2) Children who had received 3 doses of HB vaccine according to the EPI schedule at 6, 10 and 14 weeks.
3) Children who had a complete record of vaccination, duly endorsed in their vaccination cards.
4) Children who did not have a history of any chronic illness.
Exclusion Criteria
1) Children who did not have proper vaccination records endorsed in their vaccination cards.
2) Children in whom the interval between the last dose of HB vaccine and sampling was <1 month.
3) Children suffering from acute illness at time of sampling.
4) Children suffering from chronic illness or on immunosuppressive drugs.
Informed consent for blood sample collection was obtained from the parents or guardians. The study and the informed consent form were approved by the institutional ethical review board. Participants were informed of the results of HBs antibody screening. After proper antiseptic measures, blood samples (3.5 ml) were obtained by venepuncture using auto-disabled syringes. Collected blood samples were placed in vacutainers and labelled with the identification number and name of the child. Samples were immediately transported to the Biochemistry Department of Army Medical College, kept upright for half an hour and then centrifuged for 10 minutes. The supernatant serum was separated and stored at -20 °C in 1.5 ml Eppendorf tubes until the test was performed. Samples were tested by ELISA (DiaSorin S.p.A., Italy) for detection of anti-HBs antibodies according to the manufacturer's instructions. The diagnostic specificity of this kit is 98.21% (95% confidence interval 97.07-99.00%) and the diagnostic sensitivity is 99.11% (95% confidence interval 98.18-99.64%), as claimed by the manufacturer. Anti-HBs antibody enumeration was done after all 3 doses of vaccination (at least 1 month after the last dose was received).
As per WHO standards, an anti-HBs antibody titre of >10 IU/L was taken as protective (seroprotected against HB infection), while samples with antibody titres <10 IU/L were considered non-protected. All relevant information was entered on a predesigned data sheet and used at the time of analysis. Items entered included age, gender, place of vaccination, type of vaccine (privately or government procured), number of doses and entitlement status (dependant of military personnel or civilian). The study was conducted from 1st January 2010 to 31st December 2010.
Statistical Analysis
Data were analysed using SPSS version 15. Descriptive statistics were used to summarise the data, i.e. mean and standard deviation (SD) for quantitative variables, and frequencies and percentages for qualitative variables. Quantitative variables were compared using the independent-samples t-test and qualitative variables using the chi-square test. A P-value <0.05 was considered significant.
Results
One hundred and ninety-four children who had received HB vaccination according to the EPI schedule were tested for anti-HBs titres. The mean age of the children was 13.7 months. Of these, 61 (31.4%) had anti-HBs titres below 10 IU/L (non-protective level) while 133 (68.6%) had anti-HBs titres above 10 IU/L (protective level), as shown in Figure 1. The geometric mean titre (GMT) of anti-HBs among the individuals with protective levels (>10 IU/L) was 85.81 IU/L.
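For clarity, a geometric mean titre is the antilog of the mean of the log-transformed titres rather than a simple arithmetic mean; the brief sketch below illustrates the calculation using hypothetical titre values, not data from this study.

```python
# Illustrative calculation of a geometric mean titre (GMT). The titre values below are
# hypothetical and are not data from this study.
import math

def geometric_mean_titre(titres):
    """GMT = exp of the arithmetic mean of the natural-log titres."""
    return math.exp(sum(math.log(t) for t in titres) / len(titres))

example_titres = [12, 35, 60, 150, 480]  # hypothetical anti-HBs titres (IU/L), all >10 IU/L
print(f"GMT = {geometric_mean_titre(example_titres):.1f} IU/L")  # approx. 71.1 IU/L
```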
Figure 1
Figure 2
Figure 2 shows that anti-HBs titres between 10–100 IU/L were found in 75 (50.4%) children. Twenty-six (19.5%) children had titres between 100–200 IU/L, 20 (14%) had titres between 200–500 IU/L, 10 (7%) had titres between 500–1000 IU/L and only 2 (1.5%) had anti-HBs titres >1000 IU/L.
One hundred and eighty-four children received vaccine supplied from government sources (Quinvaxem, Novartis), of whom 61 (33.1%) had anti-HBs titres <10 IU/L (non-protective) and 123 (66.9%) had anti-HBs titres >10 IU/L (protective level). Only 10 children had received vaccine obtained from a private source (Infanrix Hexa, GSK), and all 10 (100%) had anti-HBs titres >10 IU/L (protective level). Comparison between the two groups revealed the difference to be significant (P value = 0.028).
One hundred and thirty-two children received vaccination at army health facilities (CMH and MH), of whom 36 (27.3%) had anti-HBs titres <10 IU/L while 96 (72.7%) had anti-HBs titres >10 IU/L. Sixty-two children were vaccinated at civilian health facilities (health centres or vaccination teams visiting homes); of these, 25 (40.3%) had anti-HBs titres <10 IU/L while 37 (59.7%) had anti-HBs titres >10 IU/L. The difference was not significant (P value = 0.068). Gender analysis revealed that 129 (68.5%) of the study group were male children; of these, 34 (26.4%) had anti-HBs titres <10 IU/L and 95 (73.6%) had anti-HBs titres >10 IU/L. Sixty-five (31.5%) were female children; of these, 27 (41.5%) had anti-HBs titres <10 IU/L while 38 (58.5%) had anti-HBs titres >10 IU/L. Statistical analysis revealed that the difference between males and females was significant (P value = 0.032).
One hundred and twenty-two (62.9%) children were less than 1 year of age; of these, 37 (30.3%) had anti-HBs titres <10 IU/L and 85 (69.7%) had anti-HBs titres >10 IU/L. Seventy-two (37.1%) children were between 1 and 2 years of age; of these, 24 (33.3%) had anti-HBs titres <10 IU/L while 48 (66.7%) had anti-HBs titres >10 IU/L. The difference between the two age groups was not significant (P value = 0.663), as shown in Table 1.
Patient characteristics | Anti-HBs titres <10 IU/L (n = 61) | Anti-HBs titres >10 IU/L (n = 133) | P-value
Age <1 year (n = 122) | 37 (30.3%) | 85 (69.7%) | 0.663 (NS)
Age >1 year (n = 72) | 24 (33.3%) | 48 (66.7%)
Male (n = 129) | 34 (26.4%) | 95 (73.6%) | 0.032*
Female (n = 65) | 27 (41.5%) | 38 (58.5%)
Army hospital (n = 132) | 36 (27.3%) | 96 (72.7%) | 0.068 (NS)
Civilian hospital (n = 62) | 25 (40.3%) | 37 (59.7%)
Government vaccine (n = 184) | 61 (33.2%) | 123 (66.8%) | 0.028*
Private vaccine (n = 10) | 0 (0%) | 10 (100%)
Table 1 (NS = not significant; * = significant)
Discussion
HB is a global health problem with variable prevalence in different parts of the world1. Studies carried out in different parts of Pakistan and in different population groups have reported diverse prevalence figures for HB. However, a figure of 3-4% is generally accepted, making Pakistan an area of intermediate endemicity for HB6. When these figures are extrapolated to our population, Pakistan is estimated to host about seven million carriers of HB, around 2% of the estimated 350 million carriers worldwide10,11.
Age at the time of infection plays the most important role in determining whether acute or chronic HBV disease develops. HBV infection acquired in infancy carries a very high risk of chronic liver disease in later life12. HB is a preventable disease and vaccination at birth and during infancy could eradicate the disease globally if the vaccination strategy were effectively implemented13. The HB vaccine can also be regarded as the first anti-cancer vaccine, as it prevents hepatocellular carcinoma in later life.
In Pakistan, the HB vaccine was included in the EPI in 2004, given along with DPT (Diphtheria, Pertussis, Tetanus) at 6, 10 and 14 weeks of age. The vaccine is provided to health facilities through the government health infrastructure. Private HB vaccines, supplied as single-antigen or combination preparations, are also available in the market. The efficacy of these recombinant vaccines is claimed to be more than 95% in children and 90% in normal healthy adults14. Immunity after HB vaccination is measured directly by the development of anti-HBs antibodies above 10 IU/L, which is considered a protective level15. However, it is estimated that 5–15% of vaccine recipients may not develop this protective level and remain non-responders for the reasons outlined below.16 Published studies of antibody development in relation to various factors show highly varied results for immunogenicity and seroprotection. Multiple factors, such as dose, dosing schedule, sex, storage, site and route of administration, obesity, genetic factors, diabetes mellitus and immunosuppression, affect the antibody response to HB vaccination17.
Although the HB vaccine was included in the EPI in Pakistan in 2004, to our knowledge no national-level data on seroconversion and seroprotection among recipients of this programme have yet been published. Our study revealed that out of 194 children, only 133 (68.6%) had anti-HBs titres in the protective range (>10 IU/L) while 61 (31.4%) did not develop seroprotection. These rates are low compared with other international studies. A study from Bangladesh among EPI-vaccinated children showed a seroprotection rate of 92.2%13, while studies from Brazil18 and South Africa19 reported seroprotection rates of 90.0% and 86.6%, respectively. Studies from Pakistan carried out in adults also show seroprotection rates (anti-HBs titres >10 IU/L) of more than 95% in Karachi University students14 and 86% in health care workers at Aga Khan University Hospital20. However, in these studies the dosing schedule was 0, 1 and 6 months and the participants were adults; their results are consistent with international reports.
The gravity of low seroprotection after HB vaccination is further aggravated when these figures are set against our overall low vaccination coverage rates of 37.6% and 45%, as shown in studies from Peshawar and Karachi, respectively21,22. A significantly high percentage of individuals therefore remain vulnerable to HBV infection even after receiving the HB vaccine through an extensive national EPI programme, and national and global eradication of HBV infection will remain a distant goal. Failure of seroprotection after HB vaccination in the EPI also gives vaccine recipients a false sense of protection.
Dosing schedule is an important factor in the development of an antibody response and titre levels. According to the Advisory Committee on Immunization Practices (ACIP) of America, there should be a minimum gap of 8 weeks between the second and third doses and at least 16 weeks between the first and third doses of the HB vaccination23. To minimise frequent visits and improve compliance, the dosing schedule was compressed in the EPI to 6, 10 and 14 weeks24. Although some studies have shown this schedule to be effective, the GMT of anti-HBs antibodies achieved was lower than that achieved with the standard WHO schedule25. This may be one explanation for the lower rate of seroprotection in our study. The GMT achieved in our study among children with protective antibody levels was 85.81 IU/L, which is lower than in most other studies and supports the observation that the GMT achieved with this schedule is lower than that produced by the standard WHO schedule. This may result in breakthrough HB infection in vaccinated individuals in later life due to waning immunity. The immune memory hypothesis, however, supports protection of vaccinated individuals in later life in spite of low anti-HBs antibody titres26. Further studies are required to clarify this risk.
Another shortcoming of this schedule is that it omits the dose at birth ('0 dose'). It has been reported that the 0 dose of the HB vaccine alone is 70%-95% effective as post-exposure prophylaxis in preventing perinatal HBV transmission, even without HB immunoglobulin27. This may also have contributed to the lower rate of seroprotection in our study, as we did not perform HBsAg and other relevant tests to rule out HBV infection in these children. Moreover, pregnant women are by and large not routinely screened for HBV infection in the public sector in Pakistan, except in a few big cities such as Islamabad, Lahore or Karachi. The HB status of pregnant mothers is therefore unknown and the risk of transmission to babies remains high. Studies have reported widely varied figures for HB status in pregnant women: a study from Karachi reports that 1.57% of pregnant women are positive for HBsAg, while a study from Rahim Yar Khan reports this figure to be up to 20%28,29. A study by Waheed et al on the transmission of HBV infection from mother to infant reports the risk to be up to 90%30. All of these studies support the importance of the birth dose of the HB vaccination and reinforce the conclusion that control and eradication of HB with the present EPI schedule is not possible. Jain, from India, has reported an alternative schedule of 0, 6 weeks and 9 months and found it comparable to the standard WHO schedule of 0, 1 and 6 months in terms of seroprotection and GMT achieved31. This schedule can be synchronised with the EPI schedule, avoiding extra visits while incorporating the birth dose, and a similar schedule could be incorporated in our national EPI.
In our study, seroprotection rates were significantly lower in female children. This finding differs from other studies, which report lower seroprotection rates in males32. Although the number of female children was smaller, there is no plausible explanation for this observation. The site of inoculation of the HB vaccine is also very important for an adequate immune response: in children, vaccine given in the buttocks or intradermally produces lower antibody titres than intramuscular injection into the outer aspect of the thigh, due to poor distribution and absorption of the vaccine. Vaccinators commonly give injections in the buttocks, which they find convenient for intramuscular injection in children. This may be another reason for the low seroprotection rates in our study, as the children were selected at random and, except for a small number of private cases, had received vaccination at public health facilities.
The effectiveness of the vaccine also depends on the source of procurement and proper maintenance of the cold chain. In this study, 100% seroprotection was observed in children who received HB vaccine procured from a private source. Although the number of private cases was small, the source of the vaccine and the cold chain also need attention. Proper training of EPI teams in temperature maintenance and injection technique, together with motivation and monitoring, could improve outcomes substantially.
The findings of this study differ from the published literature because this is a cross-sectional observational study: it reports the actual seroprotection rates achieved after HB vaccination on the EPI schedule, whereas most other studies report results obtained after controlling influencing factors such as type of vaccine, dose, schedule, route of administration, training and monitoring of local EPI teams, and the health status of vaccine recipients. It is therefore an effort to examine a practical scenario and evaluate outcomes, which can help in framing future guidelines to achieve the goal of control and eradication of HB infection. Further large-scale studies are required to determine the effect of HB vaccination at a national level.
Conclusion
The HB vaccination programme has decreased the global burden of HBV infection, but the evidence of decreased burden is not uniform across the world population: figures show a marked decrease in the developed world, whereas statistics from the developing world show little change. Implementation of this programme is not uniformly effective in all countries, so reservoirs of infection and sources of continued HBV transmission persist. HBV infection is moderately endemic in Pakistan and the HB vaccine has been included in the national EPI since 2004. The present study shows a seroprotection rate of only 68.6% in vaccine recipients, which is low compared with other studies; 31.4% of vaccine recipients remain unprotected even after vaccination. Moreover, the GMT achieved in seroprotected vaccine recipients is also low (85.81 IU/L). There can be multiple reasons for these results, such as the type of vaccine used, maintenance of the cold chain, route and site of administration, training and monitoring of EPI teams, and the dosing schedule. In present practice, the important birth dose is also omitted. These observations warrant review of the situation and appropriate measures to rectify the above-mentioned factors, so that the desired seroprotection rates after HB vaccination in the EPI can be achieved.
An 86-year-old woman was admitted from her residential home with acute-on-chronic confusion, new expressive and receptive dysphasia, dysphagia, vacant episodes and urinary incontinence. She had a significant history of haemorrhagic stroke with residual right-sided weakness, atrial fibrillation, hypertension and moderate dementia. Following a CT head, she was started on aciclovir for encephalitis. She failed to respond to treatment and developed constipation. After careful consideration of her poor prognosis and quality of life, she was placed on the End of Life Pathway and catheterised for comfort. Nine days after initial insertion of the urinary catheter, purple urine was noted in the catheter bag, with yellow urine in the tubing leading to the bag. Urine dipstick showed: blood ++, protein ++, leucocytes +, nitrites negative, glucose negative, ketones +, pH 8.0. Urine microscopy showed: WCC 454, RBC 279, epithelial cells 52, no casts. Urine culture revealed heavy mixed growth with multiple organisms.
Question: What is the diagnosis?
Answers:
Porphyria
Propofol infusion syndrome
Purple urine bag syndrome
Blue diaper syndrome
Differential diagnoses: Discoloration of urine can be caused by trauma (if blood-stained), urinary tract infection, ingestion of dyes (e.g. methylene blue) and medications (amitriptyline, indomethacin, triamterene, flutamide and phenol).
Explanation:
Porphyria usually presents with severe pain accompanied by neuropsychiatric symptoms or photosensitivity, and urine discoloration is likely to occur from the initial onset of disease.
Propofol is an anaesthetic agent excreted in the urine as phenol derivatives, which can cause green urine discolouration1. This medication is not licensed for use on the End of Life Pathway. Propofol infusion syndrome is associated with prolonged high-dose infusion, but is not always accompanied by urine discoloration.
Blue diaper syndrome is an inherited metabolic disorder of tryptophan presenting in infancy2-3.
Correct answer
Purple urine bag syndrome (PUBS)
PUBS is an uncommon condition characterised by purple discoloration of the urinary catheter system. This phenomenon is due to the presence of indigo and indirubin in the collected urine. PUBS was first reported in 19784, although some academics argue that it was described even earlier, as an observation in Sir Henry Halford's bulletin of 18115-6. Two recent literature reviews suggest the prevalence of PUBS is as high as 9.8% in institutionalised patients with long-term urinary catheterisation8-9, 12.
A triad of key factors is suggested as the cause of PUBS:
high level of tryptophan in the gut due to diet intake or bowel stasis
long term catheterisation8
urinary tract infection (UTI) with bacteria possessing indoxyl phosphatase and sulphatase enzymes, commonly Providencia stuartii and Providencia rettgeri, Pseudomonas aeruginosa, Proteus mirabilis, Escherichia coli, Klebsiella pneumoniae, Morganella, Citrobacter species, Group B Streptococci and Enterococci8, 13.
It is understood that bowel stasis causes accumulation of tryptophan, which leads to an increase in urinary indoxyl sulphate (UIS). In the presence of indoxyl phosphatase and sulphatase enzyme activity, UIS collected in the catheter system is degraded to form a mixture of indigo and indirubin dissolved in the plastic11, giving the catheter system a purple appearance. The intensity of discoloration deepens the longer the urine is in contact with the catheter plastic7, 10-12. The urine does not appear purple before it enters the catheter.
Recent literature7-8 also suggests that female gender, alkaline urine, a bed-bound debilitated patient population, PVC catheter material7 and institutionalisation are further predisposing factors for PUBS.
Management of PUBS requires catheter change and treatment of underlying UTI.
Good catheter hygiene and a shorter duration of catheterisation can reduce the occurrence of PUBS1.
A colonic diverticulum is defined as a sac-like protrusion of mucosa through the muscular component of the colonic wall1. The terms “diverticulosis” and “diverticular disease” denote the presence of diverticula without associated inflammation, whereas the term “diverticulitis” indicates inflammation of a diverticulum or diverticula, commonly accompanied by either microscopic or macroscopic perforation2.
In the developed world, diverticular disease of the colon is widespread and is present in more than 65% of those aged over 65 years3. The incidence increases dramatically with age: while only 5% of the western population are affected in the fifth decade, this rises steeply to over 50% by the eighth decade and 60% in the ninth4.
Although diverticulosis is extremely common, complications requiring surgery occur in only 1% of patients overall5 and in 10% of those admitted to hospital as an emergency for treatment6. Despite this, diverticular disease imposes a substantial healthcare burden: within the United States alone it accounts for 312,000 hospital admissions, 1.5 million days of inpatient treatment and a total estimated cost of 2.6 billion dollars per annum7.
The aetiology of diverticulosis is poorly understood, but it is probably a multifactorial process involving dietary habits (specifically low fibre intake) as well as the changes in colonic pressure, motility and wall structure that are associated with ageing8. The pathogenesis of diverticulitis is also uncertain; however, stasis or obstruction in a narrow-necked diverticulum leading to overgrowth of pathogens and local tissue ischaemia is thought likely2.
This review will discuss the common presentations, investigations and current treatment strategies utilised in the management of acute diverticulitis and its complications as well as providing an up to date synopsis of existing recommendations for follow up and prevention.
Symptoms and Signs
In Western nations, diverticula are most commonly situated in the left colon9 and 99% of patients have some element of sigmoid involvement10. Patients therefore commonly present with sigmoid diverticulitis, typically featuring left iliac fossa pain and fever with raised inflammatory markers (see below). Physical examination typically discloses left lower quadrant peritonism in simple disease, but in complicated cases may reveal a palpable abdominal mass, evidence of fistula or obstruction, or widespread peritonitis11.
In cases of complicated diverticulosis, a stricture may lead to obstructive symptoms of nausea, vomiting and distension. If a fistula has developed, a history of recurrent urinary tract infection, pneumaturia and faecaluria may also be elicited12. In a woman with a previous hysterectomy, suspicion should be further raised, as colovesical and colovaginal fistulas are rare when the uterus is in place. If a patient reports passing stools per vagina, insertion of a vaginal speculum and inspection may confirm this latter diagnosis12.
Differential diagnosis
The differential diagnosis for diverticulitis and its complications is extensive and includes irritable bowel syndrome, inflammatory bowel disease, ischaemic or infective colitis, pelvic inflammatory disease and malignancy. It is imperative to exclude the last of these4, particularly in the case of a stricture that is impassable on colonoscopy, as a substantial proportion of such specimens following resection (32% in one series13) transpire to be adenocarcinoma4. It should also be noted that sigmoid diverticulitis may masquerade as acute appendicitis if the colon is long and redundant, or otherwise situated within the abdomen or pelvis such that the inflamed segment lies in the suprapubic region, right iliac fossa or at McBurney’s point2.
Complications
Although diverticulosis is present in nearly two thirds of the elderly population, the vast majority of patients remain entirely asymptomatic. Even so, an estimated 20% of those affected will develop symptoms, mainly as diverticulitis, but potentially with the further complications of perforation, abscess, fistula and obstruction, as well as bleeding per rectum6.
The European Association for Endoscopic Surgeons (EAES) developed a classification scheme based upon the severity of diverticulitis, which broadly classifies patients into either simple symptomatic or complicated disease (Table 1)14. Where an abscess or perforation develops the Hinchey classification is used as a staging tool and can provide prognostic information on the likely outcome (Table 2)15.
Table 1 - European Association for Endoscopic Surgeons classification system for diverticulitis14
Grade of disease | Clinical explanation of grade | Clinical state of the patient
I | Symptomatic uncomplicated disease | Pyrexia, abdominal pain, CT findings consistent with diverticulitis
II | Recurrent symptomatic disease | Recurrence of Grade I
III | Complicated disease | Bleeding, abscess formation, phlegmon, colonic perforation, purulent and faecal peritonitis, stricturing, fistula and obstruction
Table 2 – Hinchey classification of perforated diverticulitis15
Hinchey stage | Features of disease | Risk of death71
Stage I* | Diverticulitis with a pericolic abscess | 5%
Stage II** | Diverticulitis with a distant abscess (this may be retroperitoneal or pelvic) | 5%
Stage III | Purulent peritonitis | 13%
Stage IV | Faecal peritonitis | 43%
* Stage I has been divided into Ia (phlegmon) and Ib (confined pericolic abscess) in later modifications38, 72
** Stage II has been divided into IIa (abscess amenable to percutaneous drainage) and IIb (complex abscess with or without fistula) in later modifications14, 73
Perforation is probably the most feared complication, and the annual incidence of perforated diverticulitis within a northern European population is currently thought to stand at 3.8 per 100,000, a figure that is increasing16. Despite this, only 1-2% of patients who attend for urgent assessment and treatment will have a gross perforation2, but for 80% this will be their first presentation, so a high index of suspicion is still required17.
Blood investigations
In clinical practice, inflammatory markers, commonly the White Blood Cell (WBC) count and C-Reactive Protein (CRP) level, are frequently employed to assist in diagnosing diverticulitis and its complications. In a recent retrospective study, a WBC count >10,000/μL was present in 62% of patients with Computed Tomography (CT) confirmed diverticulitis, and leukocytosis was significantly more common in patients with diverticulitis and associated perforation than in those without (86% v 65%, p=0.01)18.
CRP has also been shown to be of considerable benefit in the diagnosis of acute left-sided colonic diverticulitis19. A recently established diagnostic nomogram, with a reported accuracy of 86%, that was developed to improve the clinical diagnosis of diverticulitis includes an elevated CRP >50mg/l as well as other variables including age, previous episodes, aggravation of pain on movement, absence of vomiting and localisation of symptoms and tenderness in the left iliac fossa19.
In addition, it has been demonstrated that in acute sigmoid diverticulitis a CRP below 50mg/l is unlikely to correlate with an associated perforation (negative predictive value 79%) while a CRP above 200mg/l is an indicator that the patient may have a perforation (positive predictive value 69%)20. In this latter study, CRP also had the highest diagnostic accuracy in diagnosing perforation in acute sigmoid diverticulitis across a range of parameters assessed that included WBC count as well as less commonly used tests like bilirubin and alkaline phosphatase20.
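For clarity, the predictive values quoted above are simple proportions derived from a 2x2 table of the CRP threshold result against the presence or absence of perforation. The sketch below (Python; the counts are purely hypothetical and are not the data from the cited study) shows how such values are computed.

```python
# Illustrative sketch: deriving positive and negative predictive values
# from a 2x2 table. Counts are hypothetical, not from the cited study.

def predictive_values(tp, fp, fn, tn):
    """Return (PPV, NPV) for a diagnostic threshold."""
    ppv = tp / (tp + fp)  # proportion of threshold-positive patients with perforation
    npv = tn / (tn + fn)  # proportion of threshold-negative patients without perforation
    return ppv, npv

# Hypothetical example: CRP > 200 mg/l taken as the "test-positive" criterion
ppv, npv = predictive_values(tp=22, fp=10, fn=12, tn=56)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV ≈ 69%, NPV ≈ 82% for these counts
```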
Imaging investigations
In the acute phase of diverticulitis the extent of the extramural component of inflammation is more important than the degree of intramural inflammation, and as such CT with intravenous and oral contrast and, under ideal conditions, rectal contrast is the gold standard means of investigation21.
CT can accurately identify extra-luminal complications such as an abscess, phlegmon, adjacent organ involvement, or fistula, as well as recognising other alternative diagnoses such as appendicitis, pelvic inflammatory disease, tubo-ovarian abscess or inflammatory bowel disease22.
The two most frequent signs of diverticulitis on CT are bowel wall thickening (96%) and fat stranding (95%) (Figure 1) with less common but highly specific signs including fascial thickening (50%), free fluid (45%), and the presence of inflamed diverticula (43%) 23. Specifically, abscess formation (Figure 2a and b) and extracolonic air or contrast (Figure 3a and b) are findings that are known to predict severity as summarised in the CT classification system developed by Ambrosetti et al24.
Figure 1 - Sigmoid diverticulitis: sigmoid colon with multiple diverticula, significant mural thickening (arrow) and pericolic fat stranding (circles)
Figure 2b - Sigmoid diverticulitis with abscess formation: sigmoid colon displaying mural thickening, diverticulosis and pericolic fat stranding (arrow). Adjacent low attenuation, septated collection (circle) representing abscess formation, with adhesion noted to adjacent small bowel loops.
Figure 3a - Perforated sigmoid diverticulitis: sigmoid colon displaying diverticulosis and mural thickening (arrow) with adjacent collection of intra-abdominal free air and adjacent inflammatory fat stranding (circle), representing active diverticulitis with perforation.
Figure 3b - Perforated sigmoid diverticulitis: sigmoid colon displaying diverticulosis, mural thickening and pericolic inflammatory fat stranding (arrow) with adjacent collection of intra-abdominal free air and adjacent inflammatory fat stranding (circle), again representative of active diverticulitis with perforation.
However, despite CT having a reported sensitivity of 97%, specificity of 98% and global accuracy of 98%25, misdiagnosis of diverticulitis in patients who in fact have cancer is relatively common, occurring in 5% of cases21. Investigation of the colonic lumen by endoscopy or barium enema after the acute attack is therefore mandatory4, but is avoided in the initial stages for fear of perforation and exacerbation of the disease2.
In expert hands ultrasound is the next best alternative investigation with a reported sensitivity of 94%26. It has been supported by a recent systematic review27 as well as current practice guidance4 and in critically ill patients it avoids the use of intravenous and intra-luminal contrast21. However it is rarely used in practice as it is operator dependent21 and for it to be accurately utilised it requires a highly skilled/trained individual to be available at all times28.
The other practical alternative to CT is a hydro-soluble contrast enema; however, this investigation is significantly inferior both in terms of sensitivity (98 v 92%, p<0.01) and evaluation of the severity of inflammation (26 v 9%, p<0.02)29. While Magnetic Resonance Imaging (MRI) has a good sensitivity of 94% and a specificity of 87%30, in the acute setting it may be impractical both in terms of examination time and patient co-operation21. Finally, laparoscopy can also be helpful for diagnostic purposes but, in practical terms, with the increasing availability of cross-sectional imaging it is rarely required for this purpose4.
Outpatient treatment
Evidence for successful and economical outpatient treatment of uncomplicated diverticulitis is beginning to emerge. In a prospective study of 70 patients classified on the basis of an ultrasound examination as having mild-to-moderate acute colonic diverticulitis (ranging from limited inflammation within a diverticulum up to an abscess <2 cm in diameter), 68 patients were successfully treated with oral antibiotics and an initial liquid diet, leading to a cost saving on inpatient treatment of 80%31.
In a further retrospective analysis, among a cohort of patients who were referred for outpatient treatment it was found that such treatment was effective for 94% of patients, with women and those with free fluid on CT scan appearing to be at higher risk for treatment failure32.
In reality the prospect of outpatient treatment in uncomplicated cases of acute diverticulitis is determined largely by access to the necessary investigative tools for accurate diagnosis and staging of disease, the general fitness of the patient, their ability to maintain adequate oral intake, the possibility of further outpatient review, patient compliance with medications, satisfactory social support and ability to plan for endoscopic follow up21.
In broad terms, if symptoms are not severe and the patient has no significant co-morbidities and is compliant with medical treatment, then a course of broad spectrum antibiotics can be administered orally on an outpatient basis and the patient followed up at subsequent outpatient clinics. However if the patient is systemically unwell, elderly, has significant co-morbidities or there are any other concerns it is safer to arrange for a hospital admission and treatment with intravenous antibiotics12.
Conservative inpatient treatment
Simple diverticulitis requiring hospital admission is usually treated by rehydration, symptomatic relief and intravenous antibiotics. Most patients with uncomplicated disease respond well to medical treatment and generally experience significant improvement in their abdominal pain, temperature and inflammatory markers within two days of initiation of antibiotic treatment33. If this is not the case or there is clinical concern a repeat CT is advocated and operative intervention or percutaneous drainage considered (see below)2.
It should be noted that, while the use of broad-spectrum antibiotics in acute uncomplicated diverticulitis is supported by guidelines34, there is no firm evidence mandating the routine use of antibiotics in mild uncomplicated diverticulitis35, and in some European countries it is not routine36.
High-quality evidence regarding the most effective type of antibiotic is also lacking35. However, anaerobic bacteria (usually Bacteroides, Clostridium, Fusobacterium and Peptostreptococcus) are the most commonly cultured organisms, with gram-negative aerobes, especially Escherichia coli, and facultative gram-positives, such as streptococci, often grown as well37. Coverage against both gram-negative and anaerobic bacteria is therefore widely advocated2, 21, 38.
If combination antibiotics are selected, metronidazole provides excellent anaerobic cover with less risk of Clostridium difficile infection than alternatives4. However, use of a single agent may be more cost-effective39. Local protocols are likely to influence selection, but the patient may be safely switched from intravenous to oral therapy once they can tolerate a diet and oral medicines22, as intravenous antibiotics are not felt to be vastly superior40. Seven to ten days of antibiotic therapy is an acceptable treatment period22; however, evidence is emerging to support shorter courses41.
Elective surgery
In a recent position statement from the Association of Coloproctology of Great Britain and Ireland (ACPGBI) it was concluded that the majority of patients, whether young or old, presenting with acute diverticulitis could be managed with a conservative, medical approach in the longer term. Previous blanket recommendations for elective resection (e.g. following two acute episodes of diverticulitis14) were challenged in this statement, and it was proposed that the decision on elective resection should be made on an individual basis4. The traditional practice of waiting 4-6 weeks after an attack of diverticulitis before performing an elective operation was not disputed12.
Surgery in the elective setting can be performed by either an open or a laparoscopic technique, with a recent randomised trial identifying a 27% reduction in major morbidity42 along with less pain, improved quality of life and shorter hospitalisation, at the cost of a longer operating time, with the laparoscopic approach43. In expert centres, conversion rates as low as 2.8% and median hospital stays of 4 days can be achieved44, and individual case reports of resections using single laparoscopic port access have also emerged45. However, if a laparoscopic resection is considered, it is currently recommended that patients are treated after full recovery from the acute episode of inflammation, as there is evidence to suggest that lower complication and conversion rates can then be achieved4.
The principles for both approaches are the same. A colorectal anastomosis is a predictor of lower recurrence rates after elective sigmoid resection for uncomplicated diverticulitis46. It is therefore recommended that the distal resection margin is taken onto the rectum rather than the distal sigmoid, with the splenic flexure fully mobilised to facilitate this4; however, in the case of a long redundant left colon this may not be necessary12. The proximal resection margin is less clearly defined but should be made onto soft, compliant bowel4, 34. It is often possible to identify the ureters intra-operatively; however, there may be cases of complicated diverticulitis in which the extent and degree of inflammatory change warrants the use of pre-operatively placed ureteric stents to aid their identification and avoid injury12.
Emergency surgery for complicated diverticulitis
The indications for emergency operative intervention in acute diverticulitis include the presence of generalised peritonitis, uncontained visceral perforation, gross uncontrollable sepsis, a large undrainable or inaccessible abscess, bowel obstruction and lack of improvement or clinical deterioration with initial medical management 2.
Historically, perforated diverticulitis was treated with a three-stage procedure consisting of faecal diversion with a stoma, resection of the diseased segment of bowel, and finally takedown of the stoma and restoration of intestinal continuity. Practice then shifted to the Hartmann’s procedure, which comprises primary resection of the diseased segment and formation of an end colostomy, followed by colostomy reversal at a second operation11. Reconstruction in this case generally involves a second laparotomy because, although laparoscopic reconstruction is effective, it is infrequently performed47-48; as a result, reversal is often permanently deferred.
In selected cases the ideal therapeutic option in colonic perforation is a one-stage procedure with resection followed by primary anastomosis, which adds the benefits of being a definitive treatment with the avoidance of the morbidity and mortality associated with a stoma and its reversal49. A protective ileostomy after resection and primary anastomosis is viewed as a valid additional step in patients at high risk of an anastomotic leak (immunosuppression, American Society of Anaesthesiologists (ASA) grade IV, faecal peritonitis)21 but a Hartmann’s procedure may also be selected.
Particularly in cases where a stricture is causing obstruction and significant faecal loading, resection in conjunction with on-table colonic lavage and primary anastomosis may be used; this technique has also been described as facilitating a primary anastomosis in the case of a perforation50. However, in certain patients with obstruction, depending on the viability of the proximal colon, a subtotal colectomy with ileorectal anastomosis may be required12, and because small-bowel obstruction may also occur, especially in the presence of a large diverticular abscess, this may also warrant further treatment2.
The use of endoscopic colonic stenting for acute obstruction of the large bowel secondary to colonic cancer has been well documented in the literature, either as a definitive procedure or as a bridge to surgery, and it effectively decompresses the obstructed colon in 90% of cases51. However, the use of stents in benign disease is less well documented, being used mainly as a bridge to surgery52, and because it is associated with a higher incidence of complications in acute diverticular disease53 it cannot yet be recommended.
Laparoscopic surgery in the emergency setting
There have been a number of recent reports of laparoscopic lavage, with or without the placement of an intra-abdominal drain, for patients with acute diverticulitis and perforation, the reported advantages including the avoidance of an acute resection and of the possibility of a stoma4. The evidence produced thus far to support this approach is highly promising.
A recent systematic review of laparoscopic lavage for perforated colonic diverticulitis identified two prospective cohort studies, nine retrospective case series and two case reports comprising 231 patients, the vast majority of whom (77%) had Hinchey grade III purulent peritonitis. Laparoscopic peritoneal lavage successfully controlled abdominal and systemic sepsis in 95.7% of patients, mortality was 1.7%, morbidity was 10.4% and only four (1.7%) patients received a colostomy54.
In the largest series in the literature to date, Myers et al reported 100 patients with perforated diverticulitis and generalised peritonitis. Eight patients with Hinchey IV disease required conversion to an open procedure, with the overall mortality being 4% and recurrence rates only 2% over a median time period of 36 months55.
Percutaneous therapy
The appropriate management of diverticular abscesses is a matter of some debate. However according to the American Society of Colon and Rectal Surgeons (ASCRS) radiologically guided percutaneous drainage is usually the most appropriate treatment for patients with a large diverticular abscess as it avoids the need for emergency surgery and possibility of a colostomy34.
When the abscess diameter is over 5 cm, percutaneous CT guided drainage, in combination with antibiotics, is the standard treatment and offers rapid improvement in symptoms in over 90% of cases, albeit with a high recurrence rate in more severe cases38 and higher likelihood of surgery being needed in those involving the pelvis56.
In practical terms, diverticular abscesses less than 3 cm in diameter usually cannot be successfully drained, as the diameter of the pigtail of most drainage catheters is of a similar dimension28. For smaller abscesses21, especially those less than 2 cm, resolution usually occurs with intravenous antibiotics alone34. However, if a drain is sited, it is advisable that, before it is removed, resolution of the abscess is confirmed and a potential bowel fistula excluded by a further contrast study28.
Finally, diverticular disease of the colon is also a relatively common cause of acute lower gastrointestinal bleeding and is in fact the diagnosis in 23% of cases57. This usually settles with conservative management but if the bleeding is profuse angiography and endovascular intervention may be helpful, with surgery very rarely required for this indication4.
Follow up
Following successful medical management of an acute episode of diverticulitis, colonoscopy, flexible sigmoidoscopy or barium enema should be performed several weeks after the resolution of symptoms to confirm the diagnosis and rule out other colonic pathology such as malignancy, inflammatory bowel disease, or ischemia22.
Following surgery there is a reported incidence of recurrent symptoms of the order of 25%, which is attributed to the diagnostic overlap that exists with irritable bowel syndrome58. However, any suspicion of recurrent diverticulitis following surgical resection should be confirmed by CT scan, after which antibiotic treatment should be initiated as for a case of primary uncomplicated disease12. If recurrence is excluded, the high incidence (17.6%) of symptomatic anastomotic stenosis after elective laparoscopic sigmoidectomy should be borne in mind, with the possibility of endoscopic dilatation considered if applicable59.
Summary points
CT scan is the gold standard means of investigation for acute diverticulitis and helps classify the stage of disease.
Evidence to support outpatient treatment of uncomplicated diverticulitis is beginning to appear; however, hospital admission and treatment with broad-spectrum intravenous antibiotics are often required and are highly effective.
The decision to proceed with elective surgery is made on an individual basis, and evidence is gathering to advocate a laparoscopic approach.
In Hinchey stage III or IV disease, emergency laparotomy followed by either a Hartmann’s procedure or ideally in selected patients a resection followed by primary anastomosis may be required.
In certain cases percutaneous radiologically guided drainage of abscesses is an established alternative to open surgery with laparoscopic lavage another less invasive and highly promising option.
Lifestyle modifications and prevention
Following treatment, weight loss, rationalisation of certain medications and exercise are recommended: obesity is significantly associated with an increased incidence of both diverticular bleeding and diverticulitis60, as are non-steroidal anti-inflammatory drugs and paracetamol61, while physical activity is significantly associated with a reduction in the risk of complications62.
Whilst dietary fibre, particularly cellulose63, is recommended22, the evidence that supports this recommendation is not particularly strong64. Conversely, there is no evidence to support the theory that foodstuffs such as nuts, seeds, popcorn and corn, which are usually discouraged, lead to increased complications65.
Small studies without control groups suggest that probiotics may have a positive effect on the recurrence of symptomatic diverticular disease66-67. Long-term administration of the non-absorbable antibiotic rifaximin has also been used with reported success68, as has the anti-inflammatory mesalazine69. However, none of these medications has a strong evidence base and, as a result, they are not in routine use70.
The use of dietary supplements has grown rapidly over the past several decades, and they are now used by more than half of the adult population in the United States (US).1 In 1994, the Dietary Supplements Health and Education Act (DSHEA) significantly changed the Food and Drug Administration’s (FDA) role in regulating supplement labelling. According to the DSHEA, dietary supplements are products taken by mouth containing vitamins, minerals, herbs or other botanicals, amino acids, other dietary substances, or combinations or extracts of any of these ‘dietary ingredients’. The DSHEA reaffirmed that dietary supplements are to be regulated as foods and not as drugs. Annual sales of supplements to Americans are now reported at about $23 billion, a substantial share of which is spent on vitamins and minerals.
The purpose of this review is to present the available research to internists and other clinicians, to help guide their decisions regarding the efficacy and safety of dietary supplement use in the primary prevention of chronic disease in the general non-pregnant adult population.
Profile of a dietary supplement user
In general, dietary supplements are used by individuals who practise healthier lifestyles. Their use is higher among women and the children of women who use supplements; in elderly persons; among people with more education, higher income, healthier diets and lower body mass indices; and among residents of the western US.2 Individuals with chronic illnesses, or those who are seeking to prevent recurrence of a serious disease (for example, cancer), also tend to be more frequent supplement users.3 Many dietary supplement users perceive their health as better.
Why use dietary supplements?
The growth in supplement use has accelerated rapidly, with marketing spurred by claims that chronic conditions could be prevented or treated by supplement use. The commonly used over-the-counter multivitamin and mineral supplements contain at least 10 vitamins and 10 minerals. On a daily basis, consumers receive advertising and promotional material containing unproven claims about dietary supplements or other products and the medical wonders they can achieve. Some of this promotional material makes consumers feel guilty if they are not using one. Many users feel so strongly about the potential health benefits of some of these products that they report they would continue to take them even if the products were shown to be ineffective in scientifically conducted clinical studies.4 More than half of American adults take dietary supplements in the belief that they will make them feel better, give them greater energy, improve their health, and prevent and treat disease.
Is there clinical evidence for use of dietary supplements?
Most studies do not provide strong evidence for beneficial health-related effects of supplements taken singly, in pairs, or in combinations of 3 or more.5 In some studies, or subgroups of the study populations, there is encouraging evidence of health benefits such as increased bone mineral density and decreased fractures in postmenopausal women who use calcium and vitamin D supplements.
Huang et al 5 performed a systematic review to synthesize the published literature on the efficacy of multivitamin and mineral supplements and certain commonly used single vitamin or mineral supplements in the primary prevention of cancer and chronic disease in the general adult population. The authors concluded that the strength of evidence for the efficacy of multivitamin/mineral supplementation in the general adult US population was very low for primary prevention of cancer, cardiovascular disease, and hypertension; and low for cataract and age-related macular degeneration.
The National Institutes of Health (NIH) consensus panel statement2 on ‘multivitamin/mineral supplements and chronic disease prevention’ did not find any strong evidence for beneficial health-related effects of supplements taken singly, in pairs, or in combinations of 3 or more. The panel concluded that the present evidence is insufficient to recommend either for or against the use of dietary supplements by the American public to prevent chronic disease. It also concluded that the current level of public assurance of the safety and quality of dietary supplements is inadequate, given the fact that manufacturers of these products are not required to report adverse events and the FDA has no regulatory authority to require labeling changes or to help inform the public of these issues and concerns.
A recent study published in Archives of Internal Medicine6 raised some disturbing concerns. In this large prospective study, 38,772 older women in the Iowa Women's Health Study were followed up for a mean of 19.0 years. The authors found that most of the supplements studied were not associated with a reduced total mortality rate in older women. In contrast, they found that several commonly used dietary vitamin and mineral supplements, including multivitamins, vitamin B6 and folic acid, as well as the minerals iron, magnesium, zinc and copper, were associated with a higher risk of total mortality. Of particular concern, supplemental iron was strongly and dose-dependently associated with increased total mortality risk; the association was consistent across shorter intervals and strengthened with multiple use reports and with increasing age at reported use. Supplemental calcium was consistently inversely related to the total mortality rate; however, no clear dose-response relationship was observed. The strengths of this study include the large sample size and longitudinal design. In addition, the use of dietary supplements was queried three times: at baseline in 1986, in 1997 and in 2004. The use of repeated measures enabled evaluation of the consistency of the findings and decreased the risk that the exposure was misclassified.
Summary
The use of dietary supplements has grown rapidly over the past several decades, even though clinical deficiency of vitamins or minerals, other than iron, is now uncommon in the US.2 Fortification of foods has led to the remediation of vitamin and mineral deficits. The cumulative effects of supplementation and fortification have also raised safety concerns about exceeding upper intake levels, as well as about interactions of dietary supplements with the prescription drugs taken by a consumer. There are no evidence-based data about what the optimal composition and dose of a multivitamin and mineral supplement should be. Although dietary supplements are perceived to be safe, that should not be sufficient reason for using them without a valid medical need, and providers should take into consideration their efficacy and cost-effectiveness. There are also no outcomes data, or data about quality-adjusted life years gained, for dietary supplements taken singly, in pairs, or in combinations. The current data on the efficacy and safety of dietary supplements are conflicting. Clinicians considering the use of dietary supplements should be aware of their risks and consider the likelihood of adverse effects, interactions with prescription medications, safety, efficacy, costs and the possibility of unintended effects.
Conclusion
The conclusion from the available data (new and old) is that consumption of dietary supplements for prolonged periods does not appear to be safe and is not cost-effective for the primary prevention of chronic disease in the general non-pregnant adult US population. Practitioners should evaluate each case individually and make decisions based on the available evidence when considering dietary supplements in this population. Given the potential for widespread use of dietary supplements, there is a need for robust study methods in the future.
The clinical features of early HAT are well defined, yet the features of delayed HAT are less clear. Delayed HAT is a rare complication of OLT that may present with biliary sepsis or remain asymptomatic. Sonography is extremely sensitive for the detection of HAT in symptomatic patients during the immediate postoperative period. However, the sensitivity of ultrasonography diminishes as the interval between transplantation and diagnosis of HAT increases due to collateral arterial flow. MRA is a useful adjunct in patients with indeterminate ultrasound exams and in those who have renal insufficiency or an allergy to iodinated contrast.
In the absence of hepatic failure, conservative treatment appears to be effective for patients with HAT but retransplantation may be necessary as a definitive treatment.
Case Presentation:
A 52-year-old male with a history of whole-graft OLT for primary sclerosing cholangitis presented with two days of fever, nausea and mild abdominal discomfort.
One week prior to presentation, he had been seen in the liver clinic for regular follow-up. At that time, he was entirely asymptomatic and his laboratory workup, including liver function tests, was within normal range.
He had undergone OLT three years previously. At the time of transplant he required transfusion of 120 units of packed red blood cells, 60 units of fresh frozen plasma and 100 units of platelets because of extensive intraoperative bleeding secondary to the chronic changes of pancreatitis and severe portal hypertension, but had an otherwise uneventful postoperative recovery.
On physical examination, the temperature was 39°C, the heart rate was 125 beats per minute and the respiratory rate was 22 breaths per minute. Initial laboratory workup revealed a white blood cell count of 25,000/mm3, AST of 6230 U/L, ALT of 2450 U/L, total bilirubin of 11 mg/dL, BUN of 55 mg/dL and creatinine of 4.5 mg/dL. The lactate level was 5 mmol/L. Doppler ultrasonography revealed extensive intrahepatic gas (Image 1A). Computed tomography of the abdomen and pelvis revealed an extensive area of hepatic necrosis with abscess formation measuring 19x14 cm, with extension of gas into the peripheral portal vein branches (Image 1B, C). Upon admission to the hospital, the patient required endotracheal intubation, mechanical ventilatory support and aggressive fluid resuscitation. He was started on broad-spectrum antibiotics and a percutaneous drain was placed, which drained dark, foul-smelling fluid. Cultures from the blood and the drain grew Clostridium perfringens.
Magnetic resonance imaging (MRI) with MRA revealed occlusion of the hepatic artery 2 cm from its origin, with evidence of collaterals (Image 2A, B).
Image 1: (Panel A) Doppler ultrasonography revealing extensive intrahepatic gas. (Panels B and C) Computed tomography of the abdomen and pelvis revealing an extensive area of hepatic necrosis with abscess formation measuring 19x14 cm, with extension of gas into the peripheral portal vein branches.
Image 2: MRI and MRA revealing occlusion of the hepatic artery 2 cm from its origin, with evidence of collaterals.
Following drain placement, the patient’s clinical condition markedly improved, with a significant reduction in liver function test values. Retransplantation was considered but was delayed in the setting of ongoing infection and significant clinical and laboratory improvement.
The patient was transferred to the medical floor in stable condition, and the drain was then removed.
A week later the patient developed low-grade fevers and tachycardia. One day later he began to experience mild abdominal discomfort and high-grade fevers. Repeat CT of the abdomen revealed worsening hepatic necrosis and formation of new abscesses. His clinical condition decompensated quickly thereafter, requiring endotracheal intubation, mechanical ventilation and aggressive resuscitation. A percutaneous drain was placed and again drained purulent, foul-smelling material. His overall condition deteriorated and he died a few days later.
Discussion:
Delayed HAT (occurring more than 4 weeks after transplantation) is a rare complication of OLT, with an estimated incidence of around 2.8%1.
Risk factors associated with development of HAT include Roux-en-Y biliary reconstruction, cold ischaemia and operative time, the use of greater than 6 units of blood, the use of greater than 15 units of plasma, and the use of aortic conduits on arterial reconstruction during transplant surgery2.
Collateralization is more likely to develop after Live Donor Liver Transplantation (LDLT) than after whole-graft cadaveric OLT3. Therefore, the latter is also associated with increased risk of late HAT.
Although the clinical features of early HAT are well described, the features of delayed HAT are less clearly defined1: the patient may present with manifestations of biliary sepsis or may remain asymptomatic for years. Right upper quadrant pain has been reported to occur in both immediate and delayed HAT. The clinical presentations may include recurrent episodes of cholangitis, cholangitis with a stricture, cholangitis and intrahepatic abscesses, and bile leaks1. Doppler ultrasonography has been extremely sensitive for the detection of HAT in symptomatic patients during the immediate postoperative period but becomes less sensitive as the interval between transplantation and diagnosis of HAT increases because of collateral arterial flow4.
3D gadolinium-enhanced MRA provides excellent visualisation of the arterial and venous anatomy with a fairly high technical success rate. MRA is a useful adjunct in patients with indeterminate ultrasonography examinations and in those who have renal insufficiency or an allergy to iodinated contrast5.
Antiplatelet prophylaxis can effectively reduce the incidence of late HAT after liver transplantation, particularly in patients at risk for this complication6. Vivarelli et al reported an overall incidence of late HAT of 1.67%, with a median time to presentation of 500 days; late HAT occurred in 0.4% of patients maintained on antiplatelet prophylaxis compared with 2.2% of those who did not receive prophylaxis6. The role of thrombolysis remains controversial. Whether thrombolysis is a definitive therapy or mainly a necessary step towards the proper diagnosis of the exact aetiology of HAT depends largely on the individual liver centre and needs further analysis7. Definitive endoluminal success cannot be achieved without resolving associated, and possibly instigating, underlying arterial anatomical defects. Re-establishing flow to the graft can unmask underlying lesions and allows assessment of the surrounding vasculature, providing anatomical information for a more elective, better planned and definitive surgical revision7. Whether surgical revascularization, compared with retransplantation, is a viable option or only a bridging measure to delay a second transplantation has been a longstanding controversy in the treatment of HAT.
Biliary or vascular reconstruction does not increase graft survival, and ongoing severe sepsis at the time of re-grafting results in poor survival7. However, although uncommon, delayed HAT is a major indication for re-transplantation7. In the absence of hepatic failure, conservative treatment appears to be effective for patients with hepatic artery thrombosis.
C. perfringens is an anaerobic, gram-positive rod frequently isolated from the biliary tree and gastrointestinal tract. Inoculation of clostridial spores into necrotic tissue is associated with the formation of hepatic abscesses8.
Necrotizing infections of the transplanted liver are rare. Around 20 cases of gas gangrene or necrotizing infection of the liver have been reported in the literature. Around 60% of these infections were caused by clostridial species, with C. perfringens accounting for most of them. Around 80% of patients infected with Clostridium died, frequently within hours of becoming ill9,10. Those who survived underwent prompt retransplantation, and in these patients the infection had not resulted in shock or other systemic changes that would significantly decrease the likelihood of successful retransplantation8.
Because the liver has contact with the gastrointestinal tract via the portal venous system, intestinal tract bacteria may enter the liver via translocation across the intestinal mucosa into the portal venous system. Clostridial species can also be found in the bile of healthy individuals undergoing cholecystectomy9,10.
The donor liver can also be the source of bacteria. Donors may have conditions that favor the growth of bacteria in bile or the translocation of bacteria into the portal venous blood. These conditions include trauma to the gastrointestinal tract, prolonged intensive care unit admission, periods of hypotension, use of inotropic agents, and other conditions that increase the risk of infection8,9,10. C. perfringens sepsis in OLT recipients has been uniformly fatal without emergent retransplantation. Survival from C. perfringens sepsis managed without exploratory laparotomy or emergency treatment has rarely been reported8. In patients who survive, and in whom the infection has not resulted in shock or multiple organ failure, retransplantation may be successful8.
Our patient was managed conservatively because he improved markedly, both clinically and on liver function testing; retransplantation was therefore delayed, and he was already on antiplatelet prophylaxis. Although he survived his intensive care course, his recovery was tenuous, as he quickly developed further hepatic abscesses that led to his eventual demise. Post-mortem examination revealed intra-hepatic Clostridium perfringens.
Conclusion:
We report a case of Clostridium perfringens hepatic abscess due to late HAT following OLT. Although the patient initially improved with non-surgical treatment, he eventually died. In similar cases, besides aggressive work-up and medical management, retransplantation may be necessary for a better long-term outcome.
Fear of physicians, injections, operations, the operating theatre and forced separation from parents makes the operative experience more traumatic for young children and can cause nightmares and postoperative behavioural abnormalities. Preanaesthetic medication may decrease the adverse psychological and physiological sequelae of induction of anaesthesia in a distressed child1. An important goal of premedication is to have the child arrive in the operating room calm and quiet, with intact cardiorespiratory reflexes. Various drugs have been advocated as premedication to allay anxiety and facilitate the smooth separation of children from parents. The ideal premedicant in children should be readily acceptable and should have a rapid and reliable onset with minimal side effects. Midazolam has sedative and anxiolytic activities, provides anterograde amnesia, and has anticonvulsant properties2. Ketamine, on the other hand, provides well-documented anaesthesia and analgesia; it has a wide margin of safety, as the protective reflexes are usually maintained. Oral premedication with midazolam and ketamine became widely used in paediatric anaesthesia to reduce emotional trauma and ensure smooth induction. The combination provided better premedication than either oral ketamine or midazolam alone4, but excessive salivation and hallucinations were observed5.
Dexmedetomidine is a highly selective α2-adrenoceptor agonist. Clinical investigations have demonstrated its sedative, analgesic and anxiolytic effects after intravenous administration to volunteers and postsurgical patients6. It has been used to sedate infants and children during mechanical ventilation and to sedate children undergoing radiological imaging studies8. Few published studies have used dexmedetomidine orally for the premedication of children. The purpose of this study was to evaluate the efficacy of orally administered dexmedetomidine as a hypnotic and anxiolytic agent compared with an oral ketamine/midazolam combination as preanaesthetic medication in children.
Methods:
The Hospital Ethics Committee approved the protocol, and written informed consent was obtained from parents prior to inclusion. Sixty-six children of ASA physical status I or II, aged between 2 and 6 years and scheduled for elective minor surgery with an expected duration of more than 30 minutes, were enrolled in this prospective, randomized, double-blind study. Exclusion criteria were: a known allergy or hypersensitivity reaction to any of the study drugs, organ dysfunction, cardiac arrhythmia or congenital heart disease, and mental retardation.
Children were randomly allocated to one of the two study groups using computer-generated random numbers. Group D received oral dexmedetomidine 3 μg/kg and group MK received 0.25 mg/kg oral midazolam (up to a maximum of 15 mg) with 2.5 mg/kg oral ketamine. The oral premedication was mixed with 3 ml of apple juice as a carrier to be given thirty minutes before induction of anaesthesia. The oral route was chosen as it is the most acceptable and familiar mode of drug administration. An independent investigator not involved in the observation or administration of anaesthesia for the children prepared all study drugs. Observers and attending anaesthetists who evaluated the patients for preoperative sedation and emergence from anaesthesia were blinded to the drug administered. Children had premedication in the preoperative holding area in the presence of one parent. All children received EMLA cream unless contraindicated.
After the drugs were administered, the following were observed: 1) response to the drug and onset of sedation, 2) response to separation from the family and entrance to the operating room, 3) response to venous line (IV) insertion, and 4) ease of mask acceptance during induction of anaesthesia. The times to recovery from anaesthesia and to achieving a satisfactory Aldrete score were also noted. Onset of sedation was defined as the minimum time interval necessary for the child to become drowsy or asleep.
Sedation status was assessed every 5 min for up to 30 min with a five-point scale; a score of three or higher was considered satisfactory. In addition, anxiolysis was assessed on a four-point scale, with a score of three or four considered satisfactory. Cooperation was assessed with a four-point scale, with a score of three or four considered satisfactory. Taste acceptability was evaluated on a four-point scale, with a score of 1–3 considered satisfactory. The scales are summarised below.
Score   Sedation        Anxiolysis   Cooperation   Taste
1       Alert/active    Poor         Poor          Accepted readily
2       Upset/wary      Fair         Fair          Accepted with grimace
3       Relaxed         Good         Good          Accepted with verbal complaint
4       Drowsy          Excellent    Excellent     Rejected entirely
5       Asleep          -            -             -
Heart rate, blood pressure, respiratory rate and arterial oxygen saturation were recorded before premedication, every five minutes for 30 min preoperatively, during induction of anaesthesia, every 5 min intra-operatively, every 15 min in the recovery room and every 30 min in the day-case unit until discharge.
The anaesthetic agents administered were standardized. Children were induced with sevoflurane, nitrous oxide in oxygen and fentanyl 1–2 µg/kg, and maintained with the same drugs. The trachea was intubated after administering cisatracurium 0.1 mg/kg.
At the end of the procedure, neuromuscular blockade was reversed with neostigmine and glycopyrrolate and the child was extubated. Children were then kept in the recovery room (PACU) under observation until discharge. The times to recovery from anaesthesia and to achieving a satisfactory Aldrete score were noted, as was the discharge time, and post-procedure instructions were given. Children were reviewed the following day, when parents were asked to answer a questionnaire about the surgical experience of the parent and child and any side effects experienced.
Statistical analysis was performed using SPSS version 17. Values are reported as mean ± SD and range, or as frequency (%). Numerical data (age, weight, onset of anxiolysis and sedation) were analysed with the unpaired Student’s t-test to detect differences between the groups; categorical data (the scores) were analysed with Fisher’s exact test. A P value < 0.05 was considered statistically significant. Prior to the study, we set the null hypothesis as no significant difference in sedation scores between the groups. The number of patients required in each group was determined by power analysis based on previous studies. Assuming that 79% of patients would become drowsy or asleep in the midazolam/ketamine group (15 patients), a sample size of 30 patients per group would have 80% power to detect a 20% difference in sedation (from 79% to 99%) at the 0.05 level of significance. We decided to study 66 patients to account for possible dropouts.
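For readers who wish to reproduce this sample-size reasoning, the sketch below recalculates the power for 30 evaluable children per group, assuming a two-sided comparison of two independent proportions (79% vs 99%); the exact method used is not stated in the paper, so this is an approximation rather than the authors' own calculation.

```python
# Approximate power check for the sample size quoted above, assuming a
# two-sided comparison of two independent proportions (79% vs 99%).
# This reconstruction is an assumption; the paper does not state its method.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_mk, p_target = 0.79, 0.99                 # assumed rates of "drowsy or asleep"
h = proportion_effectsize(p_target, p_mk)   # Cohen's h effect size

power = NormalIndPower().power(effect_size=h, nobs1=30, alpha=0.05,
                               ratio=1.0, alternative="two-sided")
print(f"Power with 30 children per group: {power:.2f}")  # ~0.83 under this approximation
```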
Results:
Sixty-six patients were enrolled; four did not receive the study medication and two did not have surgery on the same day, leaving 60 subjects who fulfilled the criteria for the study. Groups were comparable regarding age, sex, weight, ASA physical status, surgical interventions and duration of anaesthesia (Table 1). Operative procedures were evenly distributed and included inguinal herniorrhaphy, hydrocele repair and orchidopexy.
Table 1: Demographic characteristics and duration of anaesthesia:
                                 Group D         Group MK
No of patients                   33              33
No of patients excluded          4               2
Age (years)                      4.02 ± 1.98     4.2 ± 1.45
Gender (female/male)             13/16           15/16
ASA (I/II)                       25/4            25/6
Weight (kg)                      17.72 ± 4.4     16.56 ± 5.1
Duration of anaesthesia (min)    35.17 ± 5.9     32.7 ± 8.4
Data are expressed as mean ± SD or number. There were no significant differences between groups (P > 0.05). D, dexmedetomidine group; MK, midazolam/ketamine group; ASA, American Society of Anesthesiologists physical status.
Onset of sedation was significantly faster after premedication with midazolam/ketamine (Fig 1), and the level of sedation 30 minutes after ingestion of the premedicant was significantly better with midazolam/ketamine.
The anxiolysis scores showed that 84% of children in group MK were friendly, compared with only 51% of children in group D (Table 2). The taste of oral dexmedetomidine was judged significantly better; 13% of children rejected the oral midazolam/ketamine combination (Table 2).
Table 2: Distribution of behaviour and sedation status at time of induction:
                                     Group D        Group MK       P
Time to onset of sedation (min)      24.52 ± 3.1    18.36 ± 2.6    0.015*
Preoperative sedation score          1.6 ± 0.5      3.1 ± 0.8      0.003*
% asleep at induction                61%            90%            0.024*
Preoperative anxiolysis score        1.4 ± 0.6      2.9 ± 0.7      0.016*
% face mask acceptance               58%            88%            0.033*
% venous line insertion acceptance   72%            90%            0.005*
% satisfactory parental separation   50%            80%            0.04*
% parental satisfaction              70%            90%            0.036*
% taste acceptance                   97%            87%            0.002*
Data are expressed as mean ± SD or percentage. D, dexmedetomidine group; MK, midazolam/ketamine group. * significant, P < 0.05.
Application of a facemask at induction of anaesthesia was accepted more readily in group MK (Fig 2). Overall, satisfactory cooperation with venous line insertion was found in 90% of children in group MK, compared with 72% of children in group D (Table 2). Moreover, most of the MK-treated children were calmer and more sedated than the D-treated children at the time of separation from parents. Parental satisfaction was significantly higher in group MK.
The time interval from the end of surgery to spontaneous eye opening in the PACU was significantly shorter in group D (Fig 1), while the time to discharge from the PACU to the ward was similar for both groups (Table 3).
Table 3: Time to eye opening and PACU discharge
                                Group D      Group MK       P
Time to eye opening (min)       21 ± 4.3     30 ± 6.1       0.032*
Time to PACU discharge (min)    30 ± 3.9     28.12 ± 5.5    0.316
Data are expressed as median ± SD. D, dexmedetomidine group; MK, midazolam/ketamine group. * significant, P < 0.05.
While no child experienced respiratory complications or arterial oxygen desaturation before induction, heart rate and systolic blood pressure were marginally higher after administration of MK. In contrast, mean heart rate and systolic blood pressure were 15% lower than preoperative values in group D over the same study periods. During recovery, however, haemodynamic responses were similar.
Adverse events were recorded for the three study periods. Two children in group MK and one in group D experienced nausea, but only one patient in group MK vomited before induction. Hallucinations were recorded in 10% of patients in group MK. Excessive salivation occurred in 12% of children receiving the drug combination, compared with 7% of D-treated children.
Discussion:
Our study showed that patients receiving midazolam/ketamine were significantly calmer and more cooperative than those receiving dexmedetomidine during the preoperative period, during insertion of a venous line, during separation from parents and during application of a facemask at induction. Several studies have demonstrated the advantage of the midazolam/ketamine combination in paediatric premedication4,9, while others have reported superiority of oral dexmedetomidine premedication over oral midazolam10,11.
Based on their experience with oral dexmedetomidine as a preanaesthetic in children, Kamal et al10 and Zub et al12 reported that a dose of 3 μg/kg could be applied safely and effectively without haemodynamic side effects.
Midazolam is currently the most commonly used paediatric premedication owing to its easy application, rapid onset, short duration of action and lack of significant side effects13. Oral ketamine, meanwhile, was used in the 1970s by dentists to facilitate the treatment of mentally handicapped children. In 1982, Cetina found that rectal or oral preanaesthetic ketamine is an excellent analgesic and amnesic agent with no incidence of dysphoric reactions, possibly related to its high rate of first-pass metabolism14. The metabolite norketamine has approximately one-third the potency of ketamine, but reaches higher blood concentrations and also causes sedation and analgesia15. The use of midazolam and ketamine in combination as a premedicant combines their sedative and analgesic properties and attenuates drug-induced delirium. Ghai et al and Funk et al have also reported that a combination of midazolam and ketamine results in better premedication than either drug given alone4,9.
Like clonidine, dexmedetomidine possesses a high ratio of specificity for the α2 versus the α1 receptor (200:1 for clonidine and 1600:1 for dexmedetomidine). Through presynaptic activation of the α2 adrenoceptor, it inhibits the release of norepinephrine and decreases sympathetic tone. There is also attenuation of the neuroendocrine and haemodynamic responses to anaesthesia and surgery, leading to sedation and analgesia16. One of the highest densities of α2 receptors has been detected in the locus coeruleus, the predominant noradrenergic nucleus in the brain and an important modulator of vigilance. The hypnotic and sedative effects of α2-adrenoceptor activation have been attributed to this site in the CNS16. This allows psychomotor function to be preserved while letting the patient rest comfortably, so patients are able to return to their baseline level of consciousness when stimulated17. Clonidine and dexmedetomidine both seem to offer these beneficial properties, but dexmedetomidine has a shorter half-life, which may make it more suitable for day surgery. Zub and colleagues reported that dexmedetomidine may be an effective oral premedicant prior to anaesthesia induction or procedural sedation, and that it was effective even in patients with neurobehavioural disorders in whom previous attempts at sedation had failed12. Sakurai et al also reported that oral dexmedetomidine could be applied safely and effectively as a preanaesthetic in children18.
Dexmedetomidine is tasteless and odourless17, with 82% bioavailability after extravascular doses in healthy adults19, whereas oral midazolam formulations have a bitter taste and are usually prepared by mixing the IV preparation with a variety of sweet additives. In our study, children judged the taste of oral dexmedetomidine to be significantly better than that of the oral midazolam/ketamine mixture, although both were given with the same sweet-tasting syrup. This observation might also reflect the developmental age of these patients and the difficulty of gaining their cooperation in swallowing something they did not wish to swallow. Newer, commercially prepared oral midazolam formulations are reported to be more palatable20, but unfortunately they are not yet available in our country.
Our data confirmed that the onset of sedation and the peak sedative effect were significantly slower after oral dexmedetomidine than after oral midazolam/ketamine. These results are consistent with studies by Kamal et al and Schmidt et al, who reported a slow onset of action of oral dexmedetomidine21. In addition, Anttila et al reported that, in adults, the peak plasma concentration after oral administration is achieved at 2.2 ± 0.5 h after a lag time of 0.6 ± 0.3 h19.
In this study, dexmedetomidine premedication resulted in slight hypotension and bradycardia, which could be attributed to postsynaptic activation of α2 adrenoceptors in the central nervous system (CNS) that inhibits sympathetic activity and can thus decrease blood pressure and heart rate22. Consistent with our results, Khan et al and Aantaa et al reported that the use of dexmedetomidine can be associated with cardiovascular side effects including hypotension and bradycardia24. Conversely, Ray and Tobias did not find significant haemodynamic changes when dexmedetomidine was used to provide sedation during electroencephalographic recording in children with autism and seizure disorders25.
There were some limitations to this study. The bioavailability of oral dexmedetomidine is based on adult data, yet the timing of oral administration as a premedicant should be based on data in children; the bioavailability of oral dexmedetomidine therefore needs to be studied in children. The premedication period was 30 min; had a longer premedication period been allowed, more subjects might have attained satisfactory sedation at separation from parents and at induction of anaesthesia.
Conclusion:
In this study, premedication with oral midazolam/ketamine appeared to be superior to oral dexmedetomidine, with evident haemodynamic stability and a higher degree of parental satisfaction, although oral dexmedetomidine was better accepted by the children. No significant side effects were attributable to either premedication, and emergence from anaesthesia was comparable between groups.
Payment by results was introduced across the National Health Service (NHS) in 2005. Its aim was to provide a pricing structure (tariff) for the whole country, with some allowance for geographical variation1-2. The system uses Healthcare Resource Group (HRG) codes, in which treatments in similar cost brackets share the same code. A price (tariff) is derived from each hospital patient episode, and the patient’s registered Primary Care Trust (PCT) is billed accordingly.
In order to generate an HRG code, data are collected by the hospital clinical coding department, including primary diagnosis, comorbidity (which incurs an extra charge if applicable), complications, surgical procedure, age and duration of stay4. Diagnoses (whether primary, comorbidities or complications) are coded using ICD-10 codes, and surgical procedures are defined using OPCS-4 codes. A piece of software is then used to allocate the HRG code. Each HRG code represents a tariff, which is the average cost of a treatment nationwide; minor regional adjustments are made to reflect the cost of living2.
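As a rough illustration of the pipeline described above (the code strings below are invented placeholders rather than real ICD-10, OPCS-4 or HRG codes; the two tariff figures are those quoted later in the Results for hip-fracture fixation with and without added complexity), the grouping-and-tariff step can be thought of as a simple lookup:

```python
# Hypothetical sketch of the coding-to-tariff pipeline described above.
# All code strings are invented placeholders; real grouper software applies
# many more rules (age, complications, length of stay beyond trim points).
def group_episode(diagnosis, procedure, comorbidity):
    """Return a placeholder HRG-style code for one patient episode."""
    return f"HRG-{procedure}-{diagnosis}" + ("-CC" if comorbidity else "")

hypothetical_tariff = {                     # £ per episode
    "HRG-FIXATION-HIPFRACTURE-CC": 6685,    # 'complex' rate quoted in the Results
    "HRG-FIXATION-HIPFRACTURE": 4379,       # lower rate quoted in the Results
}

hrg = group_episode("HIPFRACTURE", "FIXATION", comorbidity=True)
print(hrg, "->", f"£{hypothetical_tariff[hrg]:,}")  # HRG-FIXATION-HIPFRACTURE-CC -> £6,685
```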
Payment by results covers all admissions, attendances at accident & emergency departments and outpatient attendances5. The 2004 NHS Improvement Plan designated 18 weeks as a target for referral to treatment (RTT)6. It is a common misconception that trauma patients do not account for considerable income within the NHS. Trauma is often seen as the poor relation compared with elective work, where a target-based culture now prevails. Elective targets must be met or hospital trusts can incur financial penalties; this situation does not apply to trauma because of the acute nature of service delivery in the majority of cases. The burden of trauma work can block elective admissions and is seen by some as a barrier to target attainment. At least 36% of orthopaedic surgeons in the United Kingdom describe trauma as part of their sub-specialist interest. We aimed to assess the throughput and income generated from one week of trauma workload and to compare this with the elective throughput in our unit for the same week, by means of a prospective study. We are not aware of any published work in this specific area.
Methods:
We followed all acute patients admitted to our trauma unit between 21/02/2008 and 28/02/2008. This represented a “trauma week” which is how the consultant rota is organised in our trust. We then compared this with the throughput in our elective unit for the same calendar period. No surgeons were on leave this week and no theatre sessions were cancelled other than the on call trauma consultant’s elective operating sessions. Our trust is a busy district general hospital with over 500 beds and approximately 55,000 emergency attendances per year. The orthopaedic directorate is staffed by ten full time consultants and serves a population of 315,000 patients.
All patient details were recorded prospectively and patients were followed until the end of their inpatient episode. Case notes were then reviewed with the coding department and ICD-10 and OPCS-4 codes were generated. Length of stay and the other required variables were reviewed in order to generate the correct HRG code. Once the analysis was complete, income for the trauma and elective groups was calculated.
Results:
Trauma:
48 patients were admitted (22 male), of whom 36 required operative intervention, utilising 14 theatre sessions. Mean age was 53.75 years (range: 7-93, median: 59). Median stay was 4 days with a mean of 13.3 days. The median and mean trim points (expected duration of stay before extra charges are incurred by the PCT) were 14.5 and 26.7 days respectively. Other consultants operated on 6 patients, either because of expertise in a specific area or because space on an elective list was utilised to reduce the backlog. The income generated by these cases is included in the trauma total because they were acute trauma interventions rather than elective cases. These results are summarised in Tables 1 and 2.
Table 1: Demographic & Income Data of Trauma and Elective Patients
                            Trauma       Elective
Median age (yrs)            59           47
Number of patients          47           71
No of males                 26           30
Median stay (days)          4            1
Range of stay (days)        1 - 107      1 - 7
Total bed days              637          118
Estimated bed costs (£)     203,840      26,550
Mean income per pt (£)      3658.32      2117.15
Total income (£)            171,941      150,318
Table 2: Income by Anatomic Region
Trauma
Region           Length of stay, median (days)   No of pts   Total income (£)   Mean income per patient (£)
Upper limb       1                               18          32,455             1,803
Spine            1                               5           10,327             2,065
Hip              26                              13          90,891             5,494
Knee             5                               5           19,576             3,915
Foot and ankle   6                               7           18,692             2,670
Total            -                               47          171,941            3,658

Elective
Region           Length of stay, median (days)   No of pts   Total income (£)   Mean income per patient (£)
Upper limb       1                               9           11,469             1,274
Spine            0                               34          44,887             1,320
Hip              4                               12          43,660             3,638
Knee             2                               12          45,434             3,786
Foot and ankle   1                               4           4,868              1,217
Total            -                               71          150,318            2,117
Of the 48 patients admitted, 12 required no operative intervention. These were routine admissions such as soft tissue infections for intravenous antibiotics, undisplaced fractures where home circumstances obstructed discharge, soft tissue injuries for further investigation, and back pain. They are not discussed further, but the income generated (£31,127) contributes to the total. The median stay was 2 days with a mean of 8.5 days (range: 1 – 47), reflecting the broad comorbidities and social circumstances of this subset.
The group requiring operative intervention included hip fractures (11 patients). Of these, seven required dynamic hip screw fixation but were deemed “complex” due to their comorbidities and therefore attracted the higher tariff rate (£6685). One displaced intracapsular fracture required total hip replacement, attracting a tariff of £7261. One patient required revision from a dynamic hip screw to an intramedullary device and then revision to a total hip arthroplasty. The tariff price was £19,479. The remaining fractured neck of femur patients attracted between £4379 and £6711 dependent on operative procedure. The median stay was 26 days (mean: 14, range 9 – 107). One patient required closed manipulation of a dislocated total hip replacement attracting a tariff price of £1034 and an inpatient stay of one day. In addition one acetabular fracture was sustained requiring open reduction and internal fixation. It attracted a tariff price of £4262 and an inpatient stay of seventeen days.
One patient required open reduction and internal fixation of a patella fracture attracting a tariff of £2405 and was an inpatient for 10 days. Another patient with septic arthritis required two arthroscopic knee washouts, attracting a tariff of £5941 and was an inpatient for 26 days.
Seven ankle fractures were admitted requiring operative intervention; all attracted a tariff of £2405 except one, which attracted £4262 due to comorbidity and complexity of injury. The median stay in this group was six days (mean: 4.9, range: 2-7).
Thirteen patients sustained hand and wrist injuries requiring operative intervention. Of these there were two tendon repairs, two abscesses drained and one digital terminalisation. Five wrist fractures required either manipulation and plaster application, closed reduction and Kirschner wiring or open reduction and internal fixation by means of a volar plate. Three fractures of the base of the thumb were manipulated and percutaneously K-wired. These patients attracted a tariff of between £1048 and £3227. Median stay was one day (mean: 1.36, range: 1 – 3). Three of these cases were managed by our hand surgeon on a trauma list.
One patient admitted with cauda equina syndrome required microdiscectomy attracting a tariff of £1271 and was an inpatient for one day. This was performed by one of our spinal surgeons on a trauma list.
Elective:
71 procedures were performed (36 female patients), utilising 22 theatre sessions. Mean age was 49.51 years (range: 11–87, median: 47). Mean stay was 2.3 days. The median and mean trim points were 2 and 6.35 days respectively. Cases were divided by anatomical region, and a table of income for both trauma and elective patients by anatomical region is included (Table 2).
Twelve patients had hip procedures performed. These included hip injections (n=2, tariff £615), sciatic nerve exploration (n=1, tariff £1217), cemented total hip arthroplasty (n=2, tariff £4304), uncemented total hip arthroplasty (n=1, £5305), resurfacing hip arthroplasty (n=5, £4023) and revision hip arthroplasty (n=1, £7185).
Twelve patients had knee procedures performed. These consisted of total knee replacements (n=3, tariff £5613), unicompartmental knee replacements (n=4, £5613), one anterior cruciate ligament reconstruction (£1863), knee arthroscopies (n=2, tariff £1063), one removal of metal work (tariff £1063) and one scar revision (tariff £1091).
Four patients had foot and ankle procedures performed and these all attracted £1217 tariff price. They consisted of one ganglion excision, one hallux valgus correction, one excision of Morton’s neuroma and one ankle arthroscopy.
Nine patients had upper limb procedures performed. These comprised carpal tunnel decompression (n=1, £1217), radial head excision (n=1, £1217), shoulder stabilisations (n=3, £1217), subacromial decompression (n=1, £1217), acromioclavicular joint excision (n=1, £1063), diagnostic shoulder arthroscopy (n=1, £1217) and arthroscopic cuff repair (n=1, £1887).
34 patients had spinal procedures performed. Inpatient stay ranged from 0 to 5 days, with trim points of 1–13 days. The procedures included nerve root injections (n=23, tariff £522), discography (n=3, tariff £615), microdiscectomy and interspinous distraction (n=2, tariff £3192), decompressions, fusions and instrumentation (n=5, tariff £4252–£5140), and kyphoplasty (n=1, tariff negotiated as there is no HRG code; income £1506). Total income for the spinal group was £44,887.
It can be seen from the data that a wide range of trauma and elective surgery was performed and that the elective group was admittedly younger and had a shorter hospital stay (Table 1). Our unit has the benefit of two spinal surgeons who operate a local and tertiary practice, which changes the demographic of our cohort slightly; other units may not have this factor adjusting their income.
The tariff income for the elective group was £150,318, which was lower than that for the trauma group of £171,941.
Discussion:
This paper is, as far as we are aware, the first to compare elective and trauma orthopaedic throughput in a busy district general hospital. It would be bold not to draw attention to our study’s limitations. We analysed only one week of the financial year and accept that seasonal variation may occur. The weather for the week in question involved no snow or ice and was warmer than average for the time of year (5.2°C)10, so we do not feel that severe weather influenced our admissions. Previous studies have assessed the effect of seasonal variation on admission rates. One, in a winter sports resort in Switzerland, unsurprisingly showed a positive correlation between season and fracture incidence11. Another study, based in Tasmania, showed no variation in either vitamin D levels or the incidence of femoral neck fracture12. This goes against the findings of a study based at three latitudes, which showed a high seasonal peak in Scotland, Hong Kong and New Zealand. Our locality has a temperate climate with no local winter sports resorts, and our experience of seasonal variation is minor.
Miscoding, and therefore error in the calculations, may have occurred; however, as both the authors and experienced coders reviewed the case notes, the likelihood of this is limited.
Our most important finding was that the mean income per trauma patient (£3658.32) was higher than that per elective patient (£2045.13), and this difference was statistically significant (p=0.001). The HRG code and income generated represent the money actually received by the hospital from the primary care trust. We openly admit that trauma patients represent a larger burden for the hospital: they tend to be older, have complex comorbidity and have an increased length of stay, and they are therefore more costly than elective patients. One study performed in a large university hospital calculated the mean cost of a hip fracture to be £8978.56 (range £3450 - £72,564); this rose to £25,940.44 if there was a superficial wound infection (range £4387 - £93,976) and £34,903 if there was a deep infection (range £9408 - £93,976)14.
Although actual income from the PCT was higher, the trauma group will have been loss-making on account of the hip fracture patients. While this is hard to quantify, it seems likely given the calculations in the Nottingham study of 3686 patients. Inpatient costs for the trauma group, ignoring theatre costs, amount to approximately £204,000. This exposes a lack of appreciation of this group’s requirements in comparison with fit elective hip patients, and probably an inequality in trauma coding for these patients.
Our study has not addressed implant costs, partly because inpatient costs significantly dwarf them, but also because we consider them a relatively fixed overhead, with costs determined by local bulk purchase agreements. The effect on the overall study outcome would be minimal, given that trauma implants are substantially cheaper than elective joint prostheses.
It became apparent to us during the course of our study that trauma can be under-resourced compared with elective care. The background team currently provided for trauma patients comprises the on-call medical team (consultant orthopaedic surgeon, specialist registrar and senior house officer), together with ward nursing staff, an anaesthetist, theatre staff, occupational therapists and physiotherapists. On the elective side there are 4 waiting list clerks, 3 surgical assistants and 3 preoperative clinic sisters, as well as reception staff and the background medical team (anaesthetist, consultant orthopaedic surgeon, specialist registrar and senior house officer). In the elective setting the aim is identification and optimisation of comorbidities pre-operatively, and discharge planning to ensure throughput and turnover of patients. We accept that pre-admission screening is not applicable to trauma, but faster throughput could improve efficiency and reduce duration of stay.
Our elective patients have a 30-bed ward with an additional 8-bed day case unit; the trauma ward has 24 inpatient beds. The elective unit has 7 registered nurses and 4 health care assistants; on the trauma ward this figure is 4 and 3 respectively. Our elective patients have 2.5 full time equivalent physiotherapists whilst our trauma patients have 1.5.
This situation is probably not dissimilar to that in many units elsewhere in the country. This work has shown that trauma income is higher than that for elective work, and from this we can infer that if resources were directed accordingly then length of stay could be reduced and profit could be a possibility. A recent paper using Hospital Episode Statistics (HES) data showed that length of stay fell quickly once payment by results was implemented15. What was unclear was whether this represented a real change in efficiency or simply a change in data manipulation by trusts; HES data have repeatedly been noted to be inaccurate, with accuracy ranging from 10 to 98% depending on region and disease group16-17. In a 2006 statement, the then Health Minister Mr A Burnham quoted £88 million as being wasted on 390,000 extra, unnecessary bed days18. This was based on the cost of an elective bed being £225 per day, with acute beds being significantly more expensive (approximately £320 per day in one study). The total stay for the 71 elective patients was 118 days, whereas that for the 48 trauma patients was 637 days. Several outliers hugely increased the figure for trauma: ten trauma patients accounted for 464 days of inpatient care. If inpatient stay were reduced by one day for fractured neck of femur patients alone, this would amount to roughly 500 fewer bed days per year and approximately £160,000 per year reduction in overhead costs for the trust.
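As a rough check of that last figure (a sketch only: the ~500 bed days and the £320 acute bed cost are the approximate values quoted above, not exact trust data):

```python
# Back-of-envelope check of the annual saving quoted in the discussion above.
# Assumed inputs, taken approximately from the text rather than trust accounts.
bed_days_saved_per_year = 500        # one fewer day for each femoral neck fracture admission
acute_bed_cost_per_day = 320         # £ per day, figure quoted in the text

annual_saving = bed_days_saved_per_year * acute_bed_cost_per_day
print(f"Approximate annual saving: £{annual_saving:,}")   # £160,000
```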
One study in the USA assessed the use of a caseworker to expedite discharge for elderly patients with hip fractures19. The study did not utilise extra physiotherapy and occupational therapy support. Findings were increased theatre, anaesthetic and blood product costs in elderly patients. Increasing age did not correlate with length of stay, cost of stay or income for the hospital. They found that a case manager did reduce the average stay but did not reduce the overall cost. The NHS would do well to note these findings - in many trusts patient flow practitioners are being employed to try and expedite discharge and increase patient turnover. We feel that this money could be channelled into rehabilitation services to effect prompt rehabilitation and discharge.
One final issue is the variation in income between secondary and tertiary centres for certain injuries. One acetabular fracture underwent fixation, generating £4262; had this been referred to a tertiary centre, a supplementary specialised service code would have applied, generating more income (up to 70% more in some cases) for an identical intervention. We agree that certain injuries require tertiary treatment by a team with high-volume experience and specialised skills, but the chasm between the income generated by secondary and tertiary centres for the same injury seems perverse.
Overall trauma income was higher than elective income, but still ran at a loss. This was on account of the length of stay of the hip fracture patients and current coding underestimating their true cost to the trust. There is a disparity between rehabilitation services provided for trauma and elective patients, which needs to be addressed to improve efficiency.
The Royal College of Psychiatrists defines 'graduates' as people who have had enduring or episodic severe mental disorder in adulthood and have reached the age of 65 years. Estimates of the most severely affected range from 11 to 60 per 100,000.1 This group of people is uniquely disabled by a combination of social, mental health and physical disadvantages, and there is a risk of their falling between general adult services, rehabilitation services and old age psychiatry.2
There has been an ongoing debate about identifying the best practice in the management of this group of patients who often have spent most of their lives in the old psychiatric asylums. The recommendations include identifying all graduates within the service followed by a full assessment of the patients' health and social care needs and the implementation of a care plan to meet these needs, to be reviewed at least annually. According to the report, the medical responsibility will rest with a principal in general practice or a consultant psychiatrist, and maintenance of continuous review should be the responsibility of the case manager.1
The Recovery and Rehabilitation Team (RRT) in Newham was founded in 1988 to facilitate the discharge of groups of patients from Goodmayes Hospital (Essex). Patients discharged to residential care units and other supported schemes had usually spent many years in the institution, and the team's remit after relocation into the community was mainly monitoring of mental health by conducting multiprofessional reviews in the care homes, crisis intervention, and the promotion of social networks and leisure activities. Over the following years, the team also received many referrals from Community Mental Health Teams (CMHT) for continuing care of people suffering from long-term and severe mental illness. Today, a considerable proportion of these patients have 'graduated' into old age, and they now make up nearly 25% of the total caseload.
Our survey was carried out following an independent review by the Health and Social Care Advisory Service (HASCAS) in January 2005 of the rehabilitation services provided in the London Borough of Newham. The recommendations included an assessment of needs for all patients 65 years of age or over, using the Camberwell Assessment of Need for the Elderly (CANE)3. This is a comprehensive needs assessment tool suitable for use in a variety of settings. It has been used successfully for older people in primary care, sheltered accommodation, residential homes, nursing homes, and mental health services for older people. However, it has not been used before to specifically assess the needs of older people who have graduated within general adult mental health or rehabilitation services. The CANE has been found to be a valid and reliable tool that is easy to use by different professions.4
Method:
The RRT database was searched for all patients aged 65 years or over. This yielded 52 patients, who were approached between June and September 2005 for a comprehensive assessment after an explanation of the survey. CMHTs were asked for the numbers of graduates in their services, obtained from the respective databases.
The CANE is a structured, 24-item questionnaire covering different areas (see Table 2), including social, psychological, mental health and physical needs. It is easily applicable by different professions and requires on average about one hour of assessment time. It measures met and unmet needs and obtains views from patients, carers, staff and the rater. Assessments were carried out by members of the multi-disciplinary team, which consists of a consultant psychiatrist, the team manager, two senior clinical medical officers, two clinical psychologists, two occupational therapists, two social workers, five community psychiatric nurses and four community support workers. All raters had received one day of training provided by Juanita Hoe, one of the contributors to the development of the CANE.
The collected data were analysed using Microsoft Excel.
Results
The total number of patients aged 65 years and above under the care of the rehabilitation services was 52 (24.5% of the total caseload of 212 patients). There were a further ten patients under the care of the adult CMHTs in Newham. Attempts were also made to determine the number of the graduates under the care of mental health services for older people, but these were unsuccessful.
Of the 52 patients, two declined the assessment, leaving 50 who were assessed using the CANE; the assessment sheet of one patient could not be traced, giving a total of 49 patients and a response rate of 79% of all known 'graduate' patients under the care of adult mental health services.
Results describing patient characteristics including mean age, gender, type of accommodation and diagnosis, are summarized in Table 1.
Table 1: Demographic details
Variable                                 Value
Mean age (years)                         72.16
Gender, n (%)
  Female                                 16 (32.65%)
  Male                                   33 (67.34%)
Type of accommodation, n (%)
  Residential care                       25 (51%)
  Supported accommodation                13 (26.53%)
  Private accommodation                  12 (24.48%)
Diagnosis, n (%)
  Schizophrenia                          33 (67.34%)
  Schizoaffective disorder               6 (12.24%)
  Bipolar affective disorder             5 (10.20%)
  Depression                             2 (4.08%)
  Personality disorder                   1 (2.04%)
  OCD                                    1 (2.04%)
  Dysthymic disorder                     1 (2.04%)
Nearly two-thirds of patients were male, three-quarters of the population were living in supported living or residential care, and 90% were suffering from a severe mental illness (two-thirds from schizophrenia).
The met and unmet needs of this population are described in table 2.
Table 2: Levels of needs as rated by the rater (n=49)
Item                               No need, n (%)    Met need, n (%)   Unmet need, n (%)   Not known, n (%)
Accommodation                      22 (44.90%)       22 (44.90%)       2 (4.08%)           3 (6.12%)
Household skills                   5 (10.20%)        41 (83.67%)       3 (6.12%)           0 (0.00%)
Food                               9 (18.37%)        34 (69.39%)       6 (12.24%)          0 (0.00%)
Self-care                          12 (24.49%)       31 (63.27%)       6 (12.24%)          0 (0.00%)
Caring for other                   47 (95.92%)       2 (4.08%)         0 (0.00%)           0 (0.00%)
Daytime activities                 16 (32.65%)       14 (28.57%)       18 (36.73%)         1 (2.04%)
Memory                             34 (69.39%)       4 (8.16%)         5 (10.20%)          6 (12.24%)
Eyesight/hearing                   24 (48.98%)       14 (28.57%)       10 (20.41%)         1 (2.04%)
Mobility                           26 (53.06%)       18 (36.73%)       5 (10.20%)          0 (0.00%)
Continence                         28 (57.14%)       16 (32.65%)       3 (6.12%)           2 (4.08%)
Physical health                    14 (28.57%)       29 (59.18%)       6 (12.24%)          0 (0.00%)
Drugs                              17 (34.69%)       30 (61.22%)       2 (4.08%)           0 (0.00%)
Psychotic symptoms                 18 (36.73%)       28 (57.14%)       3 (6.12%)           0 (0.00%)
Psychological distress             29 (59.18%)       14 (28.57%)       6 (12.24%)          0 (0.00%)
Information                        28 (57.14%)       11 (22.45%)       6 (12.24%)          4 (8.16%)
Safety (deliberate self-harm)      44 (89.80%)       4 (8.16%)         0 (0.00%)           1 (2.04%)
Safety (accidental self-harm)      35 (71.43%)       11 (22.45%)       2 (4.08%)           2 (4.08%)
Safety (abuse or neglect)          35 (71.43%)       10 (20.41%)       4 (8.16%)           1 (2.04%)
Behaviour                          32 (65.31%)       12 (24.49%)       4 (8.16%)           1 (2.04%)
Alcohol                            47 (95.92%)       2 (4.08%)         0 (0.00%)           0 (0.00%)
Company                            29 (59.18%)       8 (16.33%)        11 (22.45%)         1 (2.04%)
Intimate relationship              40 (81.63%)       3 (6.12%)         4 (8.16%)           2 (4.08%)
Money                              21 (42.86%)       19 (38.78%)       9 (18.37%)          0 (0.00%)
Benefits                           37 (75.51%)       3 (6.12%)         4 (8.16%)           5 (10.20%)
Regarding unmet needs, the highest level (nearly 37%) was for daytime activities, scored by 18 of 49 people. This was followed by company (22.5%), a problem for 11 people. Eyesight or hearing also scored highly (20.4%), followed by money (18.4%) and problems in areas such as food, self-care, physical health and psychological distress (each around 12%). Problems with suicidal behaviour and drug or alcohol abuse were not evident in terms of unmet needs.
Discussion
Our results show that the majority of needs identified by the CANE were adequately met by current service provision or were identified as unmet needs by only a small minority (Table 2). Since the vast majority of patients were living in either residential or supported accommodation (51% and 26.53% respectively), items associated with domestic needs and activities appeared to be met to a great extent, e.g. accommodation (44.90% no need, 44.90% met need).
In terms of items related to mental state, the majority of patients seemed to be satisfactorily managed and receiving appropriate treatment. The relatively high number of patients suffering psychological distress could be explained by other psychosocial factors, such as the lack of daytime activities and lack of company that were identified as the major unmet needs in our population.
A recent article5 named risk of harm, unpredictability of behaviour, poor motivation, lack of insight and low public acceptability as the major reasons for social disability. In our review, however, over one-third of people clearly expressed the wish for more daytime activities, even where these disabilities might prevent a more active and satisfying lifestyle. In the interviews, it transpired that people mostly wished for an outreach service providing social contact, befriending and activities; the majority seemed rather reluctant to access general facilities, such as day centres for the elderly.
As we have assessed most of the patients under the care of adult mental health services, this survey should be able to inform service planning about the needs of this population. The development of an outreach service offering day time activities including a befriending component could be a challenge for the responsible service providers, e.g. social services, adult community mental health services and old age psychiatry.
The specific physical needs (especially eyesight and hearing) make it necessary for services to monitor these closely and to address them in the care plan, in liaison with General Practitioners.
Similar reviews should be undertaken by community mental health services in other boroughs to highlight the needs of this specific group of patients, as the respective unmet needs might be dependent upon the level of service provision.