Global recruitment into psychiatry has been falling for several decades because medical students and graduates consistently find it unattractive1,2. An analysis of the career choices of newly qualified doctors in the United Kingdom (U.K.) found the same trend from 1974 to 2009; psychiatry was the first career choice for only 3-5% of medical graduates annually3. In the U.K., poor recruitment into psychiatry had reached a crisis point by 2003, when 15% of all unfilled consultant posts in England were in psychiatry and the Royal College of Psychiatrists was finding recruitment into specialist psychiatry posts increasingly difficult4,5. In 2012, only 78% of the Core Training year one (CT1) posts in psychiatry were filled; a serious shortfall which had been offset by overseas recruitment until changes in immigration rules.
The factors that seem to dissuade medical students from taking up psychiatry as a future career include stigma, the poor prognosis of psychiatric disorders, the perceived weak scientific base of psychiatry, ‘bad-mouthing’ by medical colleagues, lack of respect among peers and the public, threats of violence from patients and lack of resources1-5. However, there is evidence that many students’ attitudes towards psychiatry as a career choice changed in a positive direction after working in the specialty, owing to the perceived ‘job satisfaction’, ‘life-style’, ‘training available’ and ‘multidisciplinary approach’3.
Psychiatry has previously been ranked higher as a career choice at the end of students’ clinical years6. To ensure a stable psychiatric workforce for the future, there is an obvious need to motivate current and future cohorts of young doctors to take up psychiatry as a career. Das & Chandrasena (1988) found that attitudes towards mental health changed positively following a clinical placement in this specialty7. It is also known that medical students’ attitudes to psychiatry and their career intentions can be improved by their experiences of teaching8. Students developed more positive attitudes when they were encouraged by senior psychiatrists, were directly involved in patient care, or saw patients respond well to treatment. Improvement in attitudes during the placement was also related to an increased intention to pursue psychiatry as a career.
Previous research into attitudes to psychiatry as a specialty and career choice has produced conflicting results, and most of it was carried out among medical students. Since career choices in the U.K. are actually made in the first clinical year following graduation, we carried out a survey among a recent cohort of foundation year one (FY1) doctors in South East England before and after their first clinical year.
Method
Our study sample consisted of all FY1 doctors (n=101) in one region of South East England. They participated in the study at the beginning and at the end of their first clinical year. We used a 20-item questionnaire devised by Das & Chandrasena (1988) to ascertain their perceptions of and attitudes towards psychiatry before they commenced their first clinical placement. The questionnaire was sent to them via their Medical Education Managers (MEMS) and handed out to the FY1 doctors as part of their induction pack, together with a study information sheet.
At the end of their first year of working, the participants were asked to complete an amended version of the questionnaire. This included two additional questions which ascertained whether the doctor had an opportunity to work in a psychiatric post, or had any experience of psychiatry in practice (such as taster days or cases in A&E). These amended questionnaires were sent to the foundation doctors electronically via their MEMS for completion.
The data were collected and entered into a spreadsheet to prepare descriptive statistics. Comparisons before and after exposure to psychiatry, and between the psychiatry and non-psychiatry groups, were made using the chi-square test. As the data were binary, a latent class model was developed using LatentGOLD software9 to explore the associations between different items in the questionnaire. Responses were coded as follows: responses agreeing with a positive attitude to psychiatry, or disagreeing with a negative attitude, were coded +1; ‘not sure’ responses were coded 0; and responses agreeing with a negative attitude, or disagreeing with a positive attitude, were coded -1.
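For illustration, the following is a minimal sketch of how one item's coded responses might be compared before and after exposure. The counts are hypothetical and the use of Python with scipy is our assumption; the published analysis was carried out with standard statistical software and LatentGOLD.

```python
from scipy.stats import chi2_contingency

# Attitude coding described above (shown for a positively keyed item;
# a negatively keyed item would have "agree"/"disagree" swapped).
CODING = {"agree": +1, "not sure": 0, "disagree": -1}

def code_responses(raw_answers):
    """Convert raw questionnaire answers into +1 / 0 / -1 attitude scores."""
    return [CODING[answer] for answer in raw_answers]

# Hypothetical counts of -1, 0 and +1 responses to one item,
# before and after exposure to psychiatry.
contingency = [
    [40, 25, 36],   # before
    [ 5,  3, 14],   # after
]
chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p:.3f}")
```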
Results
A 100% response rate (n=101) was obtained for the first set of questionnaires, completed at the beginning of the year. However, there was a substantial drop in the number of questionnaires completed at the end of the year: a 53.5% response rate (n=54) overall, but 61.1% (22 out of 36) for those FY1 doctors who had the opportunity or access to a post in psychiatry during their clinical year.
Initial cohort at beginning of the clinical year vs. those with no exposure to psychiatry at the end
Table 1 shows the group means for each questionnaire item, for the whole cohort at the beginning of the year compared to those with no exposure to psychiatry by the end of the year.
Table 1: All FY1 doctors before training placements started (initial cohort) versus FY1 doctors without a psychiatric post after FY1 training
Item | Before | After | Difference | L | U | p-value
Within medicine, psychiatry has a high status | -0.686 | -0.591 | 0.095 | -0.169 | 0.359 | 0.476
I may consider pursuing a career in psychiatry in the future | -0.539 | -0.136 | 0.403 | 0.046 | 0.760 | 0.028
Psychiatry is attractive because it is intellectually comprehensive | -0.500 | 0.273 | 0.773 | 0.436 | 1.000 | 0.000
Most non-psychiatric medical staff are not critical of psychiatry | -0.431 | -0.500 | -0.069 | -0.442 | 0.305 | 0.717
Physicians do not have time to deal with patients' emotional problems | -0.294 | 0.273 | 0.567 | 0.142 | 0.991 | 0.009
Psychiatrists understand and communicate better than other physicians | -0.127 | 0.364 | 0.491 | 0.090 | 0.892 | 0.017
Psychiatrists don't overanalyse human behaviour | 0.147 | 0.364 | 0.217 | -0.200 | 0.633 | 0.306
Expressing an interest in psychiatry is not seen as odd | 0.157 | -0.136 | -0.293 | -0.727 | 0.141 | 0.184
Hospitalised patients are not given too much medication | 0.167 | 0.591 | 0.424 | 0.116 | 0.732 | 0.007
Psychiatrists don't make less money on average than other physicians | 0.255 | 0.045 | 0.209 | -0.537 | 0.118 | 0.208
Psychiatry is a rapidly expanding frontier of medicine | 0.363 | 0.727 | 0.365 | 0.033 | 0.696 | 0.032
Psychiatric curriculum and training are not too easy | 0.520 | 0.682 | 0.162 | -0.112 | 0.436 | 0.243
Psychiatrists are not fuzzy thinkers | 0.578 | 0.818 | 0.240 | -0.082 | 0.561 | 0.142
Psychiatrists should have the legal power to treat patients against their will | 0.608 | 0.955 | 0.347 | 0.051 | 0.642 | 0.022
A placement in psychiatry can change one's negative views of psychiatry | 0.618 | 0.864 | 0.246 | -0.066 | 0.558 | 0.121
Psychiatry is scientific and precise | 0.627 | 0.818 | 0.191 | -0.098 | 0.480 | 0.194
There is a place for ECT in modern medicine | 0.755 | 0.727 | -0.028 | -0.239 | 0.184 | 0.797
Psychiatric consultations are often helpful | 0.853 | 0.864 | 0.011 | -0.210 | 0.231 | 0.924
Entering psychiatry is not a waste of a medical education | 0.873 | 1.000 | 0.127 | -0.048 | 0.303 | 0.153
Psychiatrists don't often abuse their legal powers | 0.892 | 1.000 | 0.108 | -0.049 | 0.264 | 0.175
Those FY1 trainees who had not worked in psychiatry during the year were significantly more positive (p < 0.05) about psychiatry's future, about psychiatrists being better at patient communication, and about patients not being over-medicated. However, they remained significantly less convinced than the whole cohort about psychiatry's intellectual attraction or about taking it up as a future career.
Initial cohort at beginning of the year vs. those with exposure to psychiatry at the end
Table 2 shows the group means for each questionnaire item, for the whole cohort at the beginning of the year compared to those with exposure to psychiatry at the end of the year.
Table 2: All FY1 doctors before training placements started versus FY1 doctors with a psychiatric post during FY1 training
Item | Before | After | Difference | L | U | p-value
Within medicine, psychiatry has a high status | -0.686 | -0.745 | -0.058 | -0.242 | 0.125 | 0.531
I may consider pursuing a career in psychiatry in the future | -0.539 | -0.617 | -0.078 | -0.332 | 0.177 | 0.547
Psychiatry is attractive because it is intellectually comprehensive | -0.500 | -0.468 | 0.032 | -0.214 | 0.278 | 0.798
Most non-psychiatric medical staff are not critical of psychiatry | -0.431 | 0.106 | 0.538 | 0.248 | 0.827 | 0.000
Physicians do not have time to deal with patients' emotional problems | -0.294 | -0.383 | -0.089 | -0.401 | 0.224 | 0.575
Psychiatrists understand and communicate better than other physicians | -0.127 | -0.085 | 0.042 | -0.260 | 0.345 | 0.783
Psychiatrists don't overanalyse human behaviour | 0.147 | 0.340 | 0.193 | -0.123 | 0.510 | 0.229
Expressing an interest in psychiatry is not seen as odd | 0.157 | 0.106 | -0.050 | -0.378 | 0.277 | 0.761
Hospitalised patients are not given too much medication | 0.167 | 0.362 | 0.195 | -0.044 | 0.434 | 0.109
Psychiatrists don't make less money on average than other physicians | 0.255 | 0.404 | 0.149 | -0.092 | 0.391 | 0.224
Psychiatry is a rapidly expanding frontier of medicine | 0.363 | 0.064 | -0.299 | -0.569 | -0.029 | 0.030
Psychiatric curriculum and training are not too easy | 0.520 | 0.596 | 0.076 | -0.128 | 0.281 | 0.464
Psychiatrists are not fuzzy thinkers | 0.578 | 0.596 | 0.017 | -0.233 | 0.268 | 0.892
Psychiatrists should have the legal power to treat patients against their will | 0.608 | 0.532 | -0.076 | -0.323 | 0.171 | 0.545
A placement in psychiatry can change one's negative views of psychiatry | 0.618 | 0.574 | -0.043 | -0.290 | 0.203 | 0.730
Psychiatry is scientific and precise | 0.627 | 0.702 | 0.075 | -0.155 | 0.304 | 0.521
There is a place for ECT in modern medicine | 0.755 | 0.511 | -0.244 | -0.427 | -0.061 | 0.009
Psychiatric consultations are often helpful | 0.853 | 0.745 | -0.108 | -0.289 | 0.073 | 0.239
Entering psychiatry is not a waste of a medical education | 0.873 | 0.808 | -0.064 | -0.218 | 0.090 | 0.412
Psychiatrists don't often abuse their legal powers | 0.892 | 0.766 | -0.126 | -0.279 | 0.027 | 0.105
After a psychiatry placement, significant differences (p < 0.05) were observed in responses concerning medical staff's view of psychiatry, the future of psychiatry and the place of electroconvulsive therapy (ECT) in modern medicine. While there was a positive trend in favour of psychiatry in most responses, trainees remained negative about psychiatry's status, its scientific base, its curriculum and training, and taking up psychiatry as a future career.
Those exposed to psychiatry vs. those not exposed to psychiatry
Table 3 compares responses between FY1 doctors exposed to psychiatry during the clinical year and those who were not.
Table 3: FY1 doctors who had a psychiatric post versus those who did not have one
Sorted by the size of the difference between the two groups.
Item | Psychiatry | No Psychiatry | Difference | L | U | t-test p-value | Ranksum p-value
Most non-psychiatric medical staff are not critical of psychiatry | 0.106 | -0.500 | -0.606 | -1.000 | -0.144 | 0.011 | 0.011
Psychiatrists don't make less money on average than other physicians | 0.404 | 0.045 | -0.359 | -0.694 | -0.024 | 0.036 | 0.034
Expressing an interest in psychiatry is not seen as odd | 0.106 | -0.136 | -0.243 | -0.735 | 0.249 | 0.329 | 0.322
Psychiatrists don't overanalyse human behaviour | 0.340 | 0.364 | 0.023 | -0.421 | 0.467 | 0.917 | 0.907
Psychiatric curriculum and training are not too easy | 0.596 | 0.682 | 0.086 | -0.210 | 0.382 | 0.564 | 0.497
Psychiatry is scientific and precise | 0.702 | 0.818 | 0.116 | -0.187 | 0.419 | 0.447 | 0.777
Psychiatric consultations are often helpful | 0.745 | 0.864 | 0.119 | -0.173 | 0.411 | 0.419 | 0.388
Within medicine, psychiatry has a high status | -0.745 | -0.591 | 0.154 | -0.130 | 0.437 | 0.283 | 0.391
Entering psychiatry is not a waste of a medical education | 0.808 | 1.000 | 0.191 | -0.020 | 0.403 | 0.075 | 0.058
There is a place for ECT in modern medicine | 0.511 | 0.727 | 0.217 | -0.117 | 0.551 | 0.200 | 0.192
Psychiatrists are not fuzzy thinkers | 0.596 | 0.818 | 0.222 | -0.114 | 0.559 | 0.192 | 0.190
Hospitalised patients are not given too much medication | 0.362 | 0.591 | 0.223 | -0.139 | 0.597 | 0.218 | 0.192
Psychiatrists don't often abuse their legal powers | 0.766 | 1.000 | 0.234 | -0.005 | 0.473 | 0.055 | 0.040
A placement in psychiatry can change one's negative views of psychiatry | 0.574 | 0.864 | 0.289 | -0.045 | 0.623 | 0.088 | 0.064
Psychiatrists should have the legal power to treat patients against their will | 0.532 | 0.955 | 0.423 | 0.097 | 0.748 | 0.012 | 0.011
Psychiatrists understand and communicate better than other physicians | -0.085 | 0.364 | 0.449 | 0.000 | 0.897 | 0.050 | 0.050
I may consider pursuing a career in psychiatry in the future | -0.617 | -0.136 | 0.481 | 0.084 | 0.878 | 0.028 | 0.017
Physicians do not have time to deal with patients' emotional problems | -0.383 | 0.273 | 0.656 | 0.195 | 1.000 | 0.006 | 0.007
Psychiatry is a rapidly expanding frontier of medicine | 0.064 | 0.727 | 0.663 | 0.269 | 1.000 | 0.001 | 0.002
Psychiatry is attractive because it is intellectually comprehensive | -0.468 | 0.273 | 0.741 | 0.352 | 1.000 | 0.000 | 0.001
Those exposed to psychiatry agreed more often that non-psychiatric medical staff were critical of psychiatry than did the group not exposed to psychiatry. They also responded comparatively negatively about psychiatrists not abusing their legal powers and about psychiatrists having the legal power to treat someone against their will. Trainees exposed to psychiatry also felt significantly (p < 0.05) more positive about psychiatry being intellectually comprehensive and about adopting it as a career. However, they were less enthusiastic about psychiatrists treating patients against their will and about psychiatry being a rapidly expanding frontier of medicine.
Discussion
In this study, we ascertained the attitudes of a regional cohort of FY1 doctors towards psychiatry as a specialty and as a career choice. Our findings are similar to previous research carried out among medical students, which found generally negative attitudes towards psychiatry as a specialty and career choice but fairly positive attitudes towards the role of psychiatry in medicine and in society in general1-5,10. Like others, we also found that personal experience of a psychiatry placement can improve trainees' view of psychiatry as a specialty and as a future career3,11.
It was interesting to find that, after a year in clinical practice but without any experience of psychiatry, trainees' attitudes towards psychiatry as a specialty had become more positive. It is difficult to know the exact reason, but we can speculate that this respect for the specialty may have developed as they experienced the limitations of other specialties in medicine, and/or through positive professional encounters with psychiatrists in the Accident & Emergency (A&E) department or with psychiatric liaison teams during ward consultations. In contrast to previous research11, it was heartening to note that the group with no exposure to psychiatry agreed that non-psychiatric medical staff were not critical of psychiatry; a possible sign of reduced stigma towards psychiatry within the medical profession.
Despite exposure to psychiatry, FY1 doctors' attitudes to psychiatry's status, scientific base, curriculum and training, and career choice remained somewhat negative. Similar results were found by Lyons et al11 when they assessed students' attitudes towards psychiatry after a clerkship in the specialty: there was a significant decrease in negative and stigmatising views towards mental illness after the clerkship, but no significant improvement in students' interest in psychiatry was detected1. Goldacre et al (2013) also acknowledged mixed outcomes of early experience of working in psychiatry, as it might discourage some doctors. While highlighting the positive effect of doctors' experience of the specialty, they also cited it as a negative factor that influenced some doctors who had previously considered psychiatry as a career3.
Our study is limited by its small sample and by being carried out in one small region of the country. It is also worth noting that the group exposed to psychiatry may not have had a formal psychiatry placement, as it also included those who had had taster days or experience in A&E. Such brief exposures cannot give someone a real sense of the specialty; the nature of the exposure needs to be differentiated and quantified in future studies. Our findings also need to be replicated with future cohorts and in other regions, because the FY training programme in the U.K. is relatively recent and placements in psychiatry have evolved4 over the last few years through closer collaboration between different stakeholders in the Foundation Training Programmes.
Hypertension is the most common risk factor for perioperative cardiovascular emergencies. Acute episodes of hypertension may arise due to the aggravation of a pre-existing chronic hypertensive condition or as de novo phenomena1.
Emergency, anaesthesia, intensive care and surgery are among the clinical settings where proper recognition and management of acute hypertensive episodes is of great importance. Many surgical events may induce sympathetic activity, leading to sudden elevations in BP2.
The long term end-organ effects add to patient morbidity and mortality. Ensuring cardiovascular stability and pre-optimization of BP allows safe manipulation of physiology and pharmacology during anaesthesia2. Different medications are available for the management of hypertensive emergencies. The greatest challenge is the acute care setting where the need for proper and sustained control of BP exists.
Definition
Several terms have been used for acute severe elevations in BP. The syndrome characterized by a sudden increase in systolic and diastolic BP (to 180/120 mmHg or greater), associated with acute end-organ damage and requiring immediate management to avoid a life-threatening outcome, was traditionally defined as malignant hypertension3. International blood pressure control guidelines have removed this term and replaced it with hypertensive emergency or crisis4.
Criteria for hypertensive emergencies (crises) include: dissecting aortic aneurysm, acute left ventricular failure with pulmonary oedema, acute myocardial ischemia, eclampsia, acute renal failure, symptomatic microangiopathic haemolytic anemia and hypertensive encephalopathy5.
The term 'hypertensive urgency' is suggested for patients with severe hypertension without acute end-organ damage3. The distinction between hypertensive emergencies and urgencies therefore depends on the presence of acute organ damage, rather than on the absolute level of blood pressure5.
Causes of hypertensive crises
Cessation of antihypertensive medications is one of the main causes. Other common causes are autonomic hyperactivity, collagen-vascular diseases, drug use (stimulants, e.g. amphetamines and cocaine), glomerulonephritis, head trauma, pre-eclampsia and eclampsia, and renovascular hypertension6.
Signs and symptoms of a hypertensive crisis include severe chest pain, severe headache accompanied by confusion and blurred vision, nausea and vomiting, severe anxiety, shortness of breath, seizures and unresponsiveness.
Pathogenesis
Humoral vasoconstrictors released in the hypertensive crises episodes result in a sudden increase in systemic vascular resistance. Endothelial injury accompanies severe elevations of BP resulting in fibrinoid necrosis of the arterioles with the deposition of platelets and fibrin, and a breakdown of the normal autoregulatory function. The resulting ischemia speeds the further release of vasoactive substances completing a vicious cycle7.
Perioperative hypertension
At least 25% of hypertensive patients who undergo noncardiac surgery develop myocardial ischemia associated with the induction of anaesthesia or during the intraoperative or early post-anaesthesia period8. Previous history of diastolic hypertension greater than 110 mmHg is a common predictor of perioperative hypertension. The level of risk depends on the severity of hypertension9.
Sympathetic activation during the induction of anaesthesia increases the BP by 20 to 30 mmHg and the heart rate by 15 to 20 beats per minute in normotensive individuals8. These responses may be more obvious in patients with untreated hypertension in whom the systolic BP can increase by 90 mmHg and heart rate by 40 beats per minute.
Intraoperative hypertension is associated with acute pain-induced sympathetic stimulation as well as with certain types of surgical procedures such as carotid, intrathoracic and abdominal aortic surgery. Paix et al analysed 70 incidents of intraoperative hypertension and reported that drugs were the precipitating cause (inadvertent vasopressor administration by the anaesthetist or surgeon, intravenous adrenaline with local anaesthetic, and failure to deliver a volatile agent or nitrous oxide) in 59% of cases. Light anaesthesia and excessive surgical stimulation accounted for 21% of incidents, equipment-related causes (ventilation problems, e.g. a stuck valve, hypoventilation, soda lime exhaustion and endobronchial intubation) for 13%, and awareness under general anaesthesia, myocardial infarction and pulmonary oedema for 7%10.
In the early post-anaesthesia period, hypertension often starts within 10 to 20 minutes after surgery and may persist for 4 hours. Besides pain-induced sympathetic stimulation, hypoxia, intravascular volume overload from excessive intraoperative fluid therapy, and hypothermia can promote postoperative hypertension. If untreated, patients are at high risk of myocardial ischemia, cerebrovascular accidents and bleeding11. Hypertension might also occur 24 to 48 hours postoperatively due to fluid mobilisation from the extravascular space, as well as cessation of antihypertensive medication in the early postoperative period12.
The absolute level of BP is as important as the rate of increase. For example, patients with chronic hypertension may tolerate systolic BPs (SBP) of 200 mm Hg without developing hypertensive encephalopathy, while pregnant women and children may develop encephalopathy with diastolic BPs of 100 mm Hg13.
Preoperative general considerations for hypertensive patients
During preoperative assessment, associated medical problems such as ischaemic heart disease, cerebrovascular disease and renal failure should be reviewed. This helps to assess the risk of anaesthesia and the extent of hypertensive end-organ damage. Some patients with hypertension are asymptomatic and the condition is discovered incidentally during preoperative assessment; such incidental hypertension may suggest long-standing hypertensive disease1. Idiopathic hypertension accounts for about ninety percent of hypertensive patients6.
Management of perioperative hypertension crises
The treatment of perioperative hypertension differs from the treatment of chronic hypertension. Hypertensive patients undergoing elective surgery are at increased risk of perioperative hypertensive attacks. Postponement of elective surgery is recommended in chronic hypertensive patients if the diastolic BP is ≥110 mm Hg, until the BP is controlled14. It must first be determined whether the patient has a hypertensive emergency or urgency, and the underlying cause of the BP elevation should be identified.
The most appropriate medication for management of hypertensive emergency should have a rapid onset of action, a short duration of action, be rapidly titratable, allow for dosage adjustment, have a low incidence of toxicity, be well tolerated and have few contraindications2,15. A parenteral antihypertensive agent is preferred due to rapid onset of action and ease of titration5.
The goal of therapy is to halt the vascular damage and reverse the pathological process, not to normalise the BP. Guidelines from the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High BP for treating hypertensive emergencies recommend starting by reducing the systolic BP by 10 to 15%, and by no more than 25%, within the first hour, followed by gradual reduction of the absolute BP to 160/110 mmHg over the following two to six hours5,16.
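As a worked illustration of these staged targets, a minimal sketch follows; the presenting pressure is hypothetical, the function is our own construction, and real targets must be individualised to the clinical context.

```python
def first_hour_sbp_targets(presenting_sbp):
    """Illustrate the staged reduction described above: lower SBP by
    10-15% (and by no more than 25%) within the first hour."""
    upper = round(presenting_sbp * 0.90)   # 10% reduction
    lower = round(presenting_sbp * 0.85)   # 15% reduction
    floor = round(presenting_sbp * 0.75)   # absolute first-hour limit (25%)
    return upper, lower, floor

# Hypothetical patient presenting with SBP 220 mmHg
upper, lower, floor = first_hour_sbp_targets(220)
print(f"First hour: aim for SBP {lower}-{upper} mmHg, not below {floor} mmHg")
print("Next 2-6 hours: gradual reduction towards about 160/110 mmHg")
```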
Hypertension that occurs with tracheal intubation, surgical incision and emergence from anaesthesia is best treated with short-acting β-blockers, calcium channel blockers, vasodilators, or angiotensin-converting enzyme inhibitors. Postoperative hypertension is best managed by correction of precipitating factors (pain, hypothermia, hypervolemia, hypoxia and hypercarbia)17.
Unintentional hypotension and associated organ hypoperfusion can occur with aggressive attempts to lower BP, since the homeostatic mechanisms depend on a higher blood pressure for adequate organ perfusion, while inadequate lowering of BP may result in increased morbidity and mortality. Alternating between overshooting the BP and severe hypotensive states, and then using vasopressors to regain normotensive levels, may damage end-organs and the vasculature; precise control of BP in a hypertensive crisis is therefore a challenge18.
Since chronic hypertension shifts cerebral and renal perfusion autoregulation to a higher level, the brain and kidneys are prone to hypoperfusion with rapid decrease in blood pressure. So control of blood pressure to baseline levels should take 24 to 48 hours5.
In cases of aortic dissection, the systolic BP should be reduced to less than 120 mmHg within twenty minutes. In ischemic stroke, BP must be lowered to less than 185/110 mmHg before administration of thrombolytic therapy19. Gentle volume expansion with intravenous saline solution will maintain organ perfusion and prevent a sudden drop in BP when antihypertensive medications are used5. Preoperative hypertension is a hypertensive urgency rather than an emergency, as it rarely involves end-organ damage and there is adequate time to reduce the BP18. Longer-acting oral medications such as Labetalol and Clonidine may be more suitable20.
Common antihypertensive medications used in hypertensive crises
Sodium Nitroprusside is a combined venous and arterial vasodilator which decreases both afterload and preload. The onset of action is within seconds and duration of action lasts for one to two minutes, so continuous BP measurement is recommended. If the infusion is stopped, the BP rises immediately and returns to the pretreatment level within one to ten minutes. Prolonged intravenous administration with infusion rates more than 2 mcg/Kg/min may result in cyanide poisoning. Thus, infusion rates greater than 10 mcg/Kg/min should not be continued for prolonged periods21.
Labetalol, an alpha- and beta-blocking agent, has proven beneficial in treating patients with hypertensive emergencies. Labetalol is preferred in patients with acute dissection and patients with end-stage renal disease. The onset of action is five minutes and the effect lasts for four to six hours. The rapid fall in BP results from a decrease in peripheral vascular resistance and a slight fall in cardiac output22. A reasonable administration protocol is to give an initial intravenous bolus of Labetalol 0.25 mg/Kg, followed by boluses of 0.5 mg/Kg every 15 minutes until the BP is controlled or a total dose of 3.25 mg/Kg is reached. Once an adequate BP level is achieved, oral therapy can be started with gradual weaning from parenteral agents22.
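To make the cumulative dosing arithmetic explicit, here is a minimal sketch for a hypothetical 70 kg patient. It simply restates the figures quoted above; it is an illustration, not prescribing guidance, and the function name and structure are our own.

```python
def labetalol_bolus_plan(weight_kg, initial_mg_per_kg=0.25,
                         repeat_mg_per_kg=0.5, max_total_mg_per_kg=3.25):
    """Illustrate cumulative labetalol doses for the protocol above:
    an initial IV bolus, then repeat boluses every 15 minutes until BP
    control or the maximum cumulative dose is reached."""
    doses = [initial_mg_per_kg * weight_kg]
    total = doses[0]
    while total + repeat_mg_per_kg * weight_kg <= max_total_mg_per_kg * weight_kg:
        doses.append(repeat_mg_per_kg * weight_kg)
        total += doses[-1]
    return doses, total

doses, total = labetalol_bolus_plan(70)   # hypothetical 70 kg patient
print(f"Initial bolus: {doses[0]:.1f} mg, then up to {len(doses) - 1} boluses of {doses[1]:.1f} mg")
print(f"Maximum cumulative dose: {total:.1f} mg ({total / 70:.2f} mg/kg)")
```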
Fenoldopam, a peripheral dopamine-1-receptor agonist, induces peripheral vasodilation and is administered by intravenous infusion. Its duration of action is 30 to 60 minutes. Because of its short elimination half-life, blood pressure returns gradually to pretreatment values without rebound once the infusion is stopped. A starting dose of 0.1 μg/kg/min is titrated by 0.05 to 0.1 μg/kg/min up to 1.6 μg/kg/min. Fenoldopam provides a rapid decline in blood pressure but with reflex tachycardia, so caution is needed in patients at risk of myocardial ischemia23.
Clevidipine, a dihydropyridine calcium channel blocker, produces rapid and precise BP reduction. It has a short half-life of about one to two minutes and is a potent arterial vasodilator that does not affect venous capacitance or myocardial contractility, nor does it cause reflex tachycardia24. The intravenous infusion is started at 1-2 mg/h and titrated at short intervals (about 90 seconds) initially by doubling the dose. Systolic pressure decreases by at least 15% from baseline within 6 minutes of infusion24. A 1-2 mg/h increase in infusion rate produces an additional 2-4 mmHg reduction in SBP14. Clevidipine is an ideal agent for managing acute severe hypertension and is also safe for patients with hepatic and renal dysfunction2.
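A minimal sketch of the early dose-doubling titration described above follows. The stopping rate used here is an arbitrary illustrative cap, not a value from the text; in practice titration stops when the BP response is adequate.

```python
def clevidipine_titration_steps(start_mg_per_h=1.0, illustrative_max_mg_per_h=16.0):
    """Sketch of dose-doubling titration with an adjustment roughly
    every 90 seconds, as described above.  The maximum rate is an
    illustrative cap only."""
    rate, elapsed, steps = start_mg_per_h, 0, []
    while rate <= illustrative_max_mg_per_h:
        steps.append((elapsed, rate))
        rate *= 2          # double the dose at each early titration step
        elapsed += 90      # ~90 s between adjustments
    return steps

for t, r in clevidipine_titration_steps():
    print(f"t = {t:3d} s: infusion {r:g} mg/h")
```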
Rational approach to the management of hypertensive crises
Neurological emergencies
Subarachnoid haemorrhage, acute intracerebral haemorrhage, hypertensive encephalopathy, and acute ischemic stroke require rapid BP reduction. In hypertensive encephalopathy, the mean arterial pressure (MAP) should be reduced by 25% over 8 hours. Labetalol, Nicardipine and Esmolol are the preferred medications; Nitroprusside and Hydralazine should be avoided25.
For acute ischemic stroke, the preferred medications are Labetalol and Nicardipine. The target BP is < 185/110 mm Hg especially if the patient is receiving fibrinolysis25.
In acute intracerebral haemorrhage, Labetalol, Nicardipine and Esmolol are preferred; avoid Nitroprusside and Hydralazine. If signs of increased intracranial pressure (ICP) exist, keep the SBP < 180 mm Hg; in patients without increased ICP, maintain the SBP < 160 mm Hg for the first 24 hours after the onset of symptoms25. Early intensive BP control is recommended to reduce haematoma growth26,27.
In subarachnoid haemorrhage, Nicardipine, Labetalol and Esmolol are also the preferred agents; while Nitroprusside and Hydralazine should be avoided. Maintain the SBP < 160 mm Hg until the aneurysm is treated or cerebral vasospasm happens25.
Cardiovascular emergencies
Rapid BP reduction is also indicated in cardiovascular emergencies such as aortic dissection, acute heart failure, and acute coronary syndrome. Labetalol, Nicardipine, Nitroprusside (with a beta-blocker), Esmolol, and Morphine are preferred in aortic dissection. Beta-blockers should be avoided if there is aortic valvular regurgitation or suspected cardiac tamponade. Keep the SBP < 110 mmHg unless signs of end-organ hypoperfusion exist28.
In acute coronary syndrome, if the BP is >160/100 mm Hg, Nitroglycerin and beta blockers are used to lower the BP by 20-30% of baseline, but thrombolytics are avoided if the BP is >185/100 mm Hg28. In acute heart failure, intravenous Nitroglycerin and intravenous Enalaprilat are used, and vasodilators (besides diuretics) are given when the SBP is 140 mm Hg or more28.
Cocaine toxicity/Pheochromocytoma
Diazepam, Phentolamine and Nitroglycerin/Nitroprusside are the preferred drugs. In cocaine toxicity, tachycardia and hypertension rarely require specific treatment. Phentolamine is appropriate for cocaine-associated acute coronary syndromes. In pheochromocytoma, beta blockers can be added after alpha blockade for BP control29.
Pre-eclampsia/eclampsia
The appropriate medications are Hydralazine, Nifedipine and Labetalol; Nitroprusside, Esmolol and angiotensin-converting enzyme inhibitors should be avoided. The BP should be <160/110 mm Hg in the antepartum period and during delivery, and should be maintained below 150/100 mm Hg if the platelet count is less than 100,000 cells/mm3. Intravenous Magnesium Sulphate should also be used to prevent seizures30.
Perioperative hypertension
Nitroprusside, Nitroglycerin and Esmolol are used. Target the perioperative BP to within 20% of the patient's baseline pressure. Perioperative beta blockers are best to use in patients undergoing vascular procedures or at risk of cardiac complications28.
CONCLUSION
Perioperative hypertension commonly occurs in patients undergoing surgery. The acceptable value is based on the patient's preoperative BP, approximately 10% above that baseline, although a greater reduction in BP may be warranted for patients at high risk of bleeding or with severe cardiac problems. Accurate adjustment of treatment and monitoring of the patient's response to therapy are essential for safe and effective management of perioperative hypertension.
Bed bugs belong to the family Cimicidae, and two species are involved in the modern resurgence: the common bed bug, Cimex lectularius, and the tropical bed bug, Cimex hemipterus. They are wingless insects with an oval, flattened shape that allows them to hide in narrow cracks and crevices. The adults are dark brown and 4-5mm long, growing to around 10mm when fully blood-engorged. There are five smaller juvenile stages (nymphs) that are similar in appearance, although lighter in colour. All nymphs require a blood meal to moult to the next stage, and adults also blood-feed for nutrition and, in the case of the female, for egg development. Bed bugs are solely haematophagous ectoparasites: after feeding they return to a harbourage and do not remain on the host. The main hosts are humans, but pets, bats and birds may act as secondary hosts.
Epidemiology
In the past, bed bugs were particularly an affliction of the poor. However, in the early part of the modern resurgence it was tourist areas and the hospitality sector that were initially affected.1-3 Today, bed bugs have conquered quite diverse locations, ranging from hospitals, hotels and homes to trains, cruise ships and even airplanes. Most commonly, bed bugs travel in comfort as stowaways in luggage, although they can be transferred via furnishings and other belongings, as well as by spreading to adjoining properties. Unfortunately, exact figures on the occurrence of bed bugs are unknown, as there are no mandatory reporting requirements. Additionally, due to the stigma associated with bed bugs, many infestations are simply not reported.
During the day, the largely nocturnal bed bugs crawl deep into crevices of bed frames and mattresses (Fig. 1), or behind wallpaper and floor moldings. Here they tend to lay their eggs, often several hundred during the female's lifetime. Live bed bugs, shed nymphal skins, and dark excrement spots indicate an active infestation. At night they are attracted by carbon dioxide, heat and other host odours to a victim, from which they may take a blood meal every 3-5 days. The adult bugs can survive long periods of starvation, up to five months at 22°C or even longer at cooler temperatures. When a host is found, they insert their mouthparts into the skin and blood-feed for 5-10 minutes. When bed bugs are present in large numbers, lines of bites often occur on the unfortunate victim, and this sign is almost a sure indication of the presence of the insect. The bites tend to occur along the arms and legs, down the back and across the shoulders.4,5
There has long been speculation about whether bed bugs can transmit diseases, and more than 40 different pathogens have been implicated, including Hepatitis B and C viruses, Human Immunodeficiency Virus (HIV), and Coxiella burnetii (Q fever). Recently, research has indicated that bed bugs are capable of transmitting the agent of Chagas Disease, Trypanosoma cruzi, in the laboratory. However, to date there is no evidence that bed bugs have transmitted any pathogen to humans.4,6
Clinical Features
During the act of feeding, saliva is injected which contains a variety of anticoagulants as well as other proteins whose function has yet to be determined. Contrary to popular belief, there is no evidence that bed bugs inject an anaesthetic. One protein, Nitrophorin, is involved in the transport of nitric oxide into the wound. This results in local vasodilation that increases blood supply to the feeding insect. The same protein can also induce a sensitivity to the bite.6
The diagnosis of Cimicosis is via the clinical appearance of the bite reaction and confirmation of an actual bed bug infestation (Table 1).3,5 The most commonly affected body parts are those that are left uncovered during sleep (Fig. 2,3,4), notably the arms, shoulders and legs. In young children, the face and even the eyelids can be bitten. Rarely, however, armpits are bitten, which are often preferred by other insects and ticks (Table 2).
Table 1. Bed bug infestation
Location | Signs
Bites on the body | Wheals, 4-6cm in diameter; lines of bites; any exposed body part; often intense itching; occasional central haemorrhage
Bed sheet, mattress (clothing) | Small blood spots; droppings (black dots); shed nymphal skins; eggs (small, ~1mm in length, white, oblong, glued to the substrate)
Space | Pungent smell (most commonly noticed when an insect is squashed, or during the control program)
Table 2. Differential diagnosis of epidermatozoonoses
Cause | Bite preference | Pattern | Itching | Notes
Bed bugs | Any exposed parts of the body: arms, legs, face, torso | In small infestations bites will be random; in larger infestations bites can occur in lines along the limbs and across the shoulders; large wheals (up to 6cm across) may form, even some 14 days after the bite | Often intense, especially in the morning, but can be variable between individuals | Often associated with travel or used furniture
Fleas | Exposed parts of the body, especially the legs | Random, usually not grouped or in lines | During the day | Usually associated with pets
Mosquitoes | Exposed skin, particularly legs and arms | Random | Variable between individuals | Most commonly outdoors
Ticks | Potentially anywhere on the body | Erythema migrans with Lyme disease; localised macules/papules at the bite site may occur | Low / no | Those who work or recreate in native forests are at greatest risk
Itch mites (scabies, Sarcoptes scabiei) | Forearms, interdigital, genital area | Skin rashes, subcutaneous burrows | At night | Most common in the elderly and infirm
Harvest mites (trombidiosis) | Skin surfaces under tight clothing | Red macules and wheals | Severe itching | Often occurs in gardens or meadows, most active during summer and autumn
Cheyletiellosis | Arms and trunk, contact points with pets | Polymorphic rash | Variable | Tends to be associated with pets
Bird mites | All over | Macular rash | Variable itching | Most commonly in homes as a result of birds roosting in roof cavities
Head lice (pediculosis) | In the hair of the head | Bar-shaped scratch effects with lichenification and hyper-pigmentation (Vagabond's disease) | Night and day, generally mild itching | Most common in school-aged children
Spiders, e.g. long-legged sac spiders | Arms, face | Necrotic lesion at bite site | Immediate severe pain, no itching | Uncommon
Figure 1: Typical appearance of bed bugs
Figure 2: Bites on the back, note the lines of bites common in moderate to large infestations
Figure 3: Bed bug bites on the arm, typical formation
Figure 4: Bed bug bites on the torso and arm
Figure 5: Bullae due to bed bug bites
Figure 6: Bed bugs, their droppings and eggs underneath a mattress
The degree of the bite reaction often depends on the level of prior exposure. With low-level sensitization, individuals may develop a 1-2 cm wheal with a small central haemorrhagic point, which can be recognized easily by diascopy. In contrast, a highly sensitized person will react immediately and may develop a wheal up to 15cm (6 inches) across. If many bed bugs are present, an urticarial rash may develop as a result of the large number of bites and subsequent trauma to the area from scratching. On rare occasions, vesicles and bullae (Fig. 5) may form on the arms and legs. In the course of Cimicosis, extremely itchy papules may develop and can persist for several days to weeks. Due to the strong pruritus, eczematous lesions and bacterial infections may occur, although this is extremely rare. There are case reports of systemic reactions such as anaphylaxis and asthma, although these are uncommon.
Through repeated exposure, some individuals may develop a tolerance to the bites. The clinical symptoms are then largely inapparent with small punctures at the bite site. Small blood spots are then the only clues that an infestation may be present.
Differential Diagnosis
Since reactions to the stings and bites of various arthropods are non-specific, bed bug bites are commonly misdiagnosed. Single bites, notably those of other insects such as mosquitoes, fleas and biting midges, may appear very similar morphologically (Table 2).
Consideration of where the bites are on the body can assist in the differential diagnosis. For bed bugs, lines of bites are very common in moderate to large infestations, and this clinical picture is virtually unique amongst blood-sucking arthropods. For the most part, identification of the actual pest is required to confirm the diagnosis. Histologically, bed bug bites show perivascular eosinophilic infiltrates through the superficial and deep dermis, with minimal spongiosis.
Other possible diagnostic confounders are various allergic reactions and other medical conditions such as urticaria, chickenpox, prurigo subacuta, and erythema multiforme.7,8 These do not show a central haemorrhagic point in the lesion, which helps in reaching the correct diagnosis. However, in young children the diagnosis can sometimes be difficult.
Treatment
The treatment of Cimicosis is symptomatic. Local lesions can be treated with antipruritics e.g. Polidocanol 2-4% in Lotio alba (aqueous lotion) and topical antiseptic. Spirit of menthol may also be helpful. Local treatment with antihistamines is controversial. In severe reactions topical glucocorticoids such as Betamethasone may be required. In severe itching, the use of oral antihistamines is recommended. With infected bites, antibiotic therapy may be required. Uncomplicated bed bug bites tend to stop itching within 1-2 weeks, although temporary scarring from the bite may remain for several months.
Management
Treatment of patients with bed bug bites ultimately comes down to removing the source of the irritant, namely the eradication of the active infestation. Bed bugs have a typical pungent odor. This can be used to detect bed bugs through specially trained sniffer dogs that can rapidly locate the insects.9 Due to insecticide resistance, bed bugs are very difficult to control with traditional insecticides alone, and non-chemical means of eradication must be employed to reduce the overall insect biomass. Bed bug control should be undertaken by professionals trained in bed bug management, and the process may take some weeks to achieve.
Prevention
When travelling: (1) Always inspect the bed and surrounds for bed bugs hiding beneath the mattress and/or in the seams of the bedding, and look for blood stains or small black dots (Figure 6, Table 1). (2) If bed bugs are present, request another room. (3) Always keep your luggage on the desktop or the luggage rack; a good preventative is to seal luggage in plastic or garbage bags during travel, even when in transit. (4) When returning home, all clothing should be washed at temperatures exceeding 60°C, or frozen for one week in the case of delicate fabrics. If there is no alternative to staying in an infested room, repellents containing N,N-Diethyl-meta-toluamide (DEET) should reduce the biting rate, but will not completely prevent all bed bug bites.10,11
Bed bugs can enter homes via an array of additional ways, particularly from objects bought second hand at flea markets or thrift stores, for example wooden frames, vintage clothes, furniture and the like. These should be heat-treated for a minimum of 10-20 minutes to kill bugs and their eggs.
Medical professional terminology is used by professionals to communicate with each other and with allied professions, and it differentiates professionals from patients1. As a tradition, it has perhaps evolved into a language of its own, with a vocabulary of terms used as expressions, designations or symbols, such as ‘Patient’, ‘Ward Round’ and ‘Registrar’. This ‘language’ is not restricted to doctors or nurses; it is also used by other professionals working in healthcare, e.g. medical coders and medico-legal assistants.
The National Health Service (NHS) in the U.K. has seen many changes in the last few decades. From within these changes, an interesting trend has emerged: to change or alter the use of professional terminology, often without consultation with the directly affected professionals or patients. With new or changed roles, multidisciplinary teams have been observed to alter titles, even borrowing specific terms ascribed to doctors such as “consultant,” “practitioner” and “clinical lead”2,3. On the other hand, the Modernising Medical Careers initiative4 has also led to changes in doctors’ titles reflecting their experience levels, which have been reported to be unclear to patients and fellow professionals5.
Medical professional terms can be traced back to Hippocratic writings and their development is a fascinating study for language scholars1. Psychiatric terminology is particularly interesting, as it has evolved through scientific convention while absorbing relevant legal, ethical and political trends along the way. Superficially, it may appear pedantic to quibble over terminology, but the power of language and its significance in clinical encounters is vital for high quality clinical care2,6. Since medical professional terminology is an established vehicle for meaningful communication, undue changes in its use can create inaccurate images and misunderstandings, leading to risks for professional identity. There is also evidence to suggest that such wholesale changes have been misleading7 and a source of inter-professional tension.
Understanding a professional's qualifications and experience is crucial for patient autonomy and for patients to be able to give informed consent. We carried out a survey among the foremost stakeholders in medical professional terminology, patients and doctors, within a psychiatric service, to ascertain their attitudes to the changes they have experienced in recent years.
Method:
We gave a self-report questionnaire to all adult psychiatric patients seen at a psychiatric service in the South East (U.K.) in a typical week and to all psychiatrists/doctors working there. The questionnaire was developed after a review of the relevant literature and refined following feedback from a pilot project. It covered demographic details and questions regarding attitudes towards medical professional terms for patient and professional identity, processes and working environments. The questions were mostly in a “single best of four options” style, with one question requiring a “yes” or “no” answer.
The data collected were analysed using the SPSS statistical package8. Descriptive statistics were used to summarize the characteristics of the study population. The two sub-samples (patients and doctors) were compared on the different variables using a t-test, which highlighted the absolute and relative differences between them.
Results:
196 subjects were approached to participate. 187 subjects (patients = 92, doctors = 95) participated, which represents a response rate of 95%.
Male to female ratio was roughly equal in the sample but there were more females in the medical group (56%) as compared to the patient (46%) group. Among responders, those over 40 years of age were more prevalent in the patient group (60% vs. 39%) compared to the medical group.
As shown in Table 1, patients' and doctors' attitudes overwhelmingly favoured a patient being called a "patient" (rather than "client", "service user" or "customer"), understanding a "clinician" to be a doctor (rather than a nurse, social worker or psychotherapist), and identifying the "consultant" as a psychiatrist (rather than a nurse practitioner, psychologist or social worker).
Table 1: Patients' and doctors' attitudes to the medical professional terms "patient", "clinician" and "consultant"
What do you prefer to be called? | Doctors (%) | Patients (%)
Client | 16 (17) | 13 (14)
Patient | 68 (72) | 65 (71)
Service user | 10 (11) | 11 (12)
Customer | 1 (1) | 3 (3)
Don’t know | 0 | 0
Total | 95 | 92
Chi2 1.378, p = 0.710

Which of these is a clinician? | Doctors (%) | Patients (%)
Nurse | 14 (15) | 14 (15)
Social worker | 4 (4) | 2 (2)
Doctor | 56 (59) | 70 (76)
Psychotherapist | 7 (7) | 6 (6)
Don’t know | 14 (15) | 0 (0)
Total | 95 | 92
Chi2 16.3, p<0.05

Which of these is a consultant? | Doctors (%) | Patients (%)
Psychiatrist | 71 (75) | 68 (74)
Psychologist | 3 (3) | 6 (7)
Social worker | 10 (11) | 10 (11)
Nurse practitioner | 3 (3) | 8 (9)
Don’t know | 8 (8) | 0 (0)
Total | 95 | 92
Chi2 11.3, p<0.05
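For readers who wish to check such figures, here is a minimal sketch of the first comparison above, assuming Python with scipy (the published analysis used SPSS); the all-zero "Don't know" column is dropped because expected counts of zero are not defined for the test. Running it approximately reproduces the reported Chi2 of 1.378 (p = 0.710).

```python
from scipy.stats import chi2_contingency

# "What do you prefer to be called?" counts from Table 1
# (columns: client, patient, service user, customer).
counts = [
    [16, 68, 10, 1],   # doctors (n = 95)
    [13, 65, 11, 3],   # patients (n = 92)
]
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p:.3f}")
```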
Patients and doctors preferred (>70%) to call the person who provides patient support in the community a "care-coordinator" or "key worker".
It is worth noting that the "key worker" is the main person looking after a patient admitted to hospital, while the "care-coordinator" has the same role once the patient is back in the community. Similarly, the majority of patients deemed the terms "Acute ward" and "PICU" (psychiatric intensive care unit) appropriate for a psychiatric ward.
There was strong evidence that both patients and doctors were confused as to what a "medication review" was: approximately 35% of each group thought it was a "pharmacist meeting", and the remainder were divided between "assessment", "OPD" and "nursing handover" (see Table 2).
This is understandable, because patients are used to an "Out Patient Appointment/Review" where a psychiatrist reviews them in a holistic manner, which includes prescribing and adjusting their medications. Similar confusion prevailed regarding what has replaced the term "ward round", as both groups were divided among the choices offered: "MDM" (multidisciplinary meeting), "Assessment", "CPA" (Care Programme Approach) and "Review".
Table 2: Patients' and doctors' understanding of the terms "ward round" and "medication review"
Which of these means a ward round? | Doctors (%) | Patients (%)
Assessment | 26 (27) | 34 (37)
MDM | 18 (19) | 15 (16)
Review | 34 (36) | 29 (32)
CPA | 16 (17) | 14 (15)
Don’t know | 1 (1) | 0 (0)
Total | 95 | 92
Chi2 2.82, p = 0.588

Which of these is a medication review? | Doctors (%) | Patients (%)
OPD | 19 (20) | 11 (12)
Assessment | 25 (26) | 34 (37)
Pharmacist meeting | 34 (36) | 31 (34)
Nursing handover | 14 (15) | 12 (13)
Don’t know | 3 (3) | 4 (4)
Total | 95 | 92
Chi2 3.89, p = 0.421
Both patients and doctors were clear (84% vs. 69%) that they expected to see a doctor when they attended a "clinic". However, both groups were approximately equally divided in their preferences for what a psychiatry trainee should be called: "SHO" (37%) or "Psychiatric trainee" (36-40%). There was also a higher preference (approx. 50% vs. 30%) for the doctor a grade below consultant to be called a "Senior Registrar".
Patients and doctors were unequivocal in their responses that they had never been consulted about medical professional terminology.
Fig. 1 Has anyone consulted you about these terms?
Discussion:
In a survey of attitudes to the use of medical professional terms among patients and doctors in a psychiatric service, we have found a significant preference for the older and established medical terms as compared to the newer terms such as MDM, CT trainee, Specialty Trainee, etc.
While replicating the findings of other studies3,7, we also found that no single term was chosen by 100% of participants in either group, indicating confusion surrounding most psychiatric terms. This lack of consensus and confusion can be explained by the fact that no participant had ever been consulted about the changes or the new nomenclature.
Limitations of this study should be taken into account before generalising the results. The patients' group was older than the doctors' group, which could skew the results through an age-related bias in favour of familiarity and against change9. In a questionnaire about preference and understanding, participants may intuitively prefer the terms that are easiest to understand and ignore the subtle differences between other styles. A further possibility of bias arises because some of those handing out the questionnaires were doctors.
Our sample was drawn only from a psychiatric service, which may restrict the implications of our findings to mental health.
Furthermore, involvement of other professionals and carers working in the psychiatric service would have been useful to expand the scope of this study.
Inconsistency regarding doctors' titles, unleashed by the Modernising Medical Careers (2008) initiative, has resulted in patients considering trainees to be medical students5, not recognising 'Foundation Year 1 Trainees' as qualified doctors, and being unable to rank doctors below consultant level3. Our findings have highlighted the uncertainty regarding the qualifications and seniority of doctors; this can erode patients' confidence in their doctors' abilities, compromise the therapeutic relationship10, especially in psychiatry, and result in poor treatment compliance. Medical students may also find themselves mistaken for doctors, and feel daunted by future job progression where training structures and status are unclear.
Title changes introduced by local management or the Department of Health (DoH), without consultation with stakeholders, have the potential to create inter-professional tensions and devalue the myriad skills offered by healthcare workers other than doctors. This could also be damaging to their morale and to the confidence instilled in patients. It is interesting to note, however, that titles that do not convey status and experience, such as "trainee", tend not to be adopted by non-doctor members of the multidisciplinary team3. On the other hand, in a profession steeped in tradition, there will be doctors who see other professionals' adoption of their respect-garnering and previously uncontested titles as a threat to the status of the medical profession6. Previous studies have shown that terminology has a significant effect on the confidence and self-view of doctors5, and at a time when a multitude of issues has led to an efflux of U.K. junior doctors to other countries, and a vote for industrial action, re-examining a seemingly benign issue involving titles and terminology could have a positive impact.
Patients’ attitudes to development of surgical skills by surgical nurses show that they would like to be informed if the person doing a procedure is not a doctor7.
The roles of a number of professionals involved in an individual’s healthcare can be confusing and the possibility of mistaken identity could be considered misleading6, unethical, and even fraudulent. Introducing confusion by appropriating titles associated with doctors could be damaging to patients’ trust, and is inappropriate in a health service increasingly driven towards patient choice. The challenge lies in how to keep the terminology consistent and used in the best-understood contexts.
Commissioners and managers could instead evaluate the implications of changing professional terms by making sure that all stakeholders are consulted beforehand. The pressing source of inconsistency in staff job titles could also be addressed by a broader-scale study of national, multidisciplinary and patient preferences, and by simple measures such as standardising staff name badges.
Our study has highlighted once again how the landscape of nomenclature in psychiatry and medicine is pitted with inconsistency. While language naturally evolves with time, and it may be understandable to see increasing application of business models and terminology in the NHS9, medical professional terms have been determined contextually over the years, with significant implications for patient management and safety. It is therefore important to question how changes in terminology affect patients, whether they occur through gradual culture change or through new initiatives. It would benefit patient care if medical and psychiatric professional language could be standardised and protected from changes that can lead to colleagues and patients being misled. The DoH, Commissioners and Trust/Hospital management must recognise that changing terminology can have a significant impact and that serious discussion of such changes is important for reasons far beyond pedantry. For inter-professional communication, a formalised consensus on titles would be beneficial for transparency, trust, patient safety and reducing staff stress levels.
Striae distensae, or stretch marks, are linear scars in the dermis which arise from rapid stretching of the skin over weakened connective tissue. They are a common skin condition that rarely causes significant medical problems but is often a considerable source of distress to those affected. Striae distensae were described as a clinical entity hundreds of years ago, and the first histological descriptions appeared in the medical literature in 1889.1 With a high incidence and unsatisfactory treatments, stretch marks remain an important target of research towards an optimal consensus on treatment. They appear initially as red, and later as white, lines on the skin, representing scars of the dermis, and are characterized by linear bundles of collagen lying parallel to the surface of the skin, as well as eventual loss of collagen and elastin. The estimated prevalence of striae distensae ranges from 50 to 80%.2,3 The anatomical sites affected vary, with commonly affected areas including the abdomen, breasts, thighs and buttocks.4 The three maturation stages are the acute stage (striae rubra), characterized by raised, erythematous striae; the sub-acute stage, characterized by purpuric striae; and the chronic stage (striae alba), characterized by white or hypo-pigmented, atrophied striae.5 Although stretch marks are only harmful in extreme cases, even mild stretch marks can cause distress to the bearer6 (Table 1).
Table 1: Histological comparisons between striae rubrae and striae albae
Epidermis. Striae rubrae: oedema; increased melanocytes. Striae albae: epidermal atrophy; loss of rete ridges; decreased melanocytes.
Papillary dermis. Striae rubrae: dilatation of blood vessels. Striae albae: no vascular reaction.
Reticular dermis. Striae rubrae: structural alteration of collagen fibres; reduced and reorganized elastic fibres; fine elastic fibres in the dermis. Striae albae: densely packed collagen parallel to the skin surface; thick elastic fibres in the dermis.
Inflammatory cells. Striae rubrae: lymphocytes and fibroblasts. Striae albae: eosinophils.
Aetiology
Striae may result from a number of causes, including, but not limited to, rapid changes in weight, adolescent growth spurts, corticosteroid use or Cushing's syndrome, and generally appear on the buttocks, thighs, knees, calves or lumbosacral area.7 In addition, approximately 90% of all pregnant women develop stretch marks on their breasts and/or abdomen by the third trimester.8 A genetic predisposition is also presumed, since striae distensae have been reported in monozygotic twins.9,10 There is decreased expression of collagen and fibronectin genes in affected tissue.11 The role of genetic factors is further emphasised by the fact that striae are common in inherited defects of connective tissue, as in Marfan's syndrome.12,13 Obesity and rapid increases or decreases in weight have been shown to be associated with the development of SD.14 Young male weight lifters develop striae on their shoulders.15 Striae distensae also occur in cachectic states, such as tuberculosis and typhoid, and after intense slimming diets.16 Rare causes include treatment with the protease inhibitor indinavir in human immunodeficiency virus-positive patients, and chronic liver disease.13,15 A case of idiopathic striae has also been reported.17
Rosenthal18 proposed four aetiological mechanisms of striae formation: insufficient development of tegument, including elastic properties deficiency; rapid stretching of the skin; endocrinal changes; and other causes, possibly toxic.
Pathogenesis
The pathogenesis of striae is unknown but probably relates to changes in the components of extracellular matrix, including fibrillin, elastin and collagen.19 There has been emphasis on the effects of skin stretching in the pathogenesis of striae because the lesions are perpendicular to the direction of skin tension.20 A possible role of glucocorticoids in the pathogenesis of striae has been suggested because of an increase in the levels of steroid hormones and other metabolites found in patients exhibiting striae.21 There are studies suggesting the role of fibroblasts in the pathogenesis of striae. Compared to normal fibroblasts, expression of fibronectin and both type I and type III procollagen were found to be significantly reduced in fibroblasts from striae, suggesting that there exists a fundamental aberration of fibroblast metabolism in striae distensae.22
Pathological aspects
The earliest pathological changes are subclinical and detectable only by electron microscopy. These changes include mast cell degranulation and the presence of activated macrophages in association with mid-dermal elastolysis.23 When the lesions become clinically visible, collagen bundles start showing structural alterations, fibroblasts become prominent, and mast cells are absent.23 On light microscopic examination, inflammatory changes are conspicuous in the early stage, with dermal oedema and perivascular lymphocytic cuffing.24 In later stages there is epidermal atrophy and loss of rete ridges, and other appendages, including hair follicles, are absent.25
Evaluation of striae distensae
Approaches to evaluating SD severity visually include the Davey26 and Atwal27 scores, although these have not been validated specifically for SD (Table 2; a hypothetical worked example follows the table). An objective evaluation of SD may be carried out using skin topography and imaging devices, including three-dimensional (3D) cameras, reflectance confocal microscopy and epiluminescence colorimetry.28,29,30
Table 2: Visual scoring systems for the assessment of striae distensae
Davey method
Used for evaluating striae rubrae and albae. Divide the abdomen into quadrants using midline vertical and horizontal lines. Each quadrant given a score (0 no SD; 1 moderate number of SD; 2 many SD). Score given out of 8.
Atwal score
Used for evaluating striae rubrae and albae. Four sites assessed (abdomen, hips, breasts, thighs/buttocks), each given a maximum score of six, for a total score out of 24. Score 0–3 for the presence of striae (0 no SD; 1 < 5 SD; 2 5–10 SD; 3 > 10 SD) and score 0–3 for the presence of erythema (0 no erythema; 1 light red/pink; 2 dark red; 3 purple).
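As a hypothetical worked example, not taken from the cited studies: using the Davey method, an abdomen with many SD in both lower quadrants, a moderate number in one upper quadrant and none in the other would score 2 + 2 + 1 + 0 = 5 out of 8. Using the Atwal score, a single site showing more than 10 SD with dark red erythema would contribute 3 + 2 = 5 of its maximum 6 points towards the total out of 24.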
Management
Striae distensae (striae alba) are a very challenging cosmetic problem for dermatologists to treat. Various modalities of treatment have been tried but, although therapeutic strategies are numerous, there is no treatment which consistently improves the appearance of striae and is safe for all skin types.31 Weight loss by diet alone, or by a combination of diet and exercise, does not change the degree of striae distensae.32
Topical treatments
Topical tretinoin (0.1%) ameliorates striae, and the improvement may persist for almost a year after discontinuation of therapy.33 More recently, tretinoin has been shown to improve the clinical appearance of stretch marks during the active stage (striae rubra), although with little effect during the mature stage (striae alba).34 Some studies have reported that this vitamin A derivative is ineffective in the treatment of SD, but most of the patients included in these early studies presented with old lesions that had evolved into whitish atrophic scars.35 A study comparing topical 20% glycolic acid plus 0.05% tretinoin versus 20% glycolic acid plus 10% L-ascorbic acid found that both regimens improved the appearance of striae alba.36
Hydrant Creams: 1) Trofolastin (a cream containing Centella asiatica extract, vitamin E, and collagen-elastin hydrolysates). The exact mechanism of action was identified as the stimulation of fibroblastic activity 37 and an antagonistic effect against glucocorticoids.38 2) Verum (a cream containing vitamin E, panthenol, hyaluronic acid, elastin and menthol). The results suggest that the product may show the benefit of massage alone.39 3) Alphastria (a cream composed of hyaluronic acid, allantoin, vitamin A, vitamin E, and dexpanthenol). Only one study was conducted, which concluded that the product markedly lowered the incidence of stretch mark development after pregnancy.40
Glycolic acid (GA): The exact mechanism of action of GA in the management of striae distensae is still unknown, although GA is reported to stimulate collagen production by fibroblasts and to increase their proliferation in vivo and in vitro, which may be useful in the treatment of stretch marks.41,42 A study comparing topical 20% glycolic acid plus 0.05% tretinoin versus 20% glycolic acid plus 10% L-ascorbic acid found that both regimens improved the appearance of striae alba.43
Trichloroacetic acid (TCA; 10–35%): TCA has been used for many years as a treatment option for striae distensae; it is repeated at monthly intervals, with reported improvement in the texture and colour of the marks.44
Other topical products: Several oils have been used in the prevention of SD. A non-randomized comparative study investigating the effect of almond oil in the prevention of SD noted significant differences in the frequency of SD between the groups (almond oil and massage 20%, almond oil alone 38.8%, control 41.2%).45
Overall, there is limited evidence for the efficacy of topical therapy for the treatment of SD.
Microdermabrasion
Microdermabrasion may improve many skin problems, including acne scars, skin texture irregularities, mottled pigmentation and fine wrinkles. Karimipour et al reported that microdermabrasion induces epidermal signal transduction pathways associated with remodelling of the dermal matrix.46 However, studies documenting the efficacy of microdermabrasion in the treatment of striae are lacking. A book on microdermabrasion written by a French dermatologist, Francois Mahuzier, published in 1999 and translated into English, has a chapter entitled "Microdermabrasion of stretch marks".47 The author states that 10-20 sessions of microdermabrasion at intervals of not less than 1 month, each session resulting in bleeding points, provide satisfactory results, and concludes that "microdermabrasion is the only effective treatment of stretch marks today."
Lasers
Lasers have recently become a popular therapeutic alternative to ameliorate and improve the appearance of stretch marks. The most commonly used lasers include the pulsed-dye laser (PDL), short-pulse carbon dioxide and erbium-substituted yttrium aluminium garnet (YAG) lasers, the neodymium-doped YAG (Nd:YAG) laser, diode lasers and Fraxel.
Pulsed dye laser: The dilated blood vessels of striae rubrae make them good candidates for PDL treatment.48 The 585-nm pulsed dye laser has a moderate beneficial effect in the treatment of striae rubra.49 To evaluate the effectiveness of the 585-nm flashlamp-pumped pulsed dye laser in treating cutaneous striae, 39 striae were treated with four treatment protocols.50 Subjectively, striae appeared to return toward the appearance of normal skin with all protocols. Objectively, shadow profilometry revealed that all treatment protocols reduced skin shadowing in striae. Laser treatment of SD should be avoided, or used with great caution, in darker skin types (IV–VI) because of the possibility of pigmentary alterations after treatment.51
Excimer laser: Studies have shown temporary repigmentation and improvement of leukoderma in SD with excimer laser, although it failed to show any improvement in skin atrophy.52,53 To evaluate the true efficacy of the 308-nm excimer laser for darkening striae alba, 10 subjects were treated using the excimer laser on the white lines of striae, while the normal skin near to and between the lines was covered with zinc oxide cream. The results of this study showed the weakly positive effect of the 308-nm excimer laser in the repigmentation of striae alba.54
Copper bromide laser: The copper bromide laser (577/511 nm) has been used for stretch marks. A clinical study was conducted in 15 Italian women with stretch marks treated with the CuBr laser and followed up for 2 years.55 The study concluded that the copper bromide laser was effective in decreasing the size of the SD, and that there were pathogenic considerations justifying the use of this laser.
1,450-nm Diode Laser: The non-ablative 1,450-nm diode laser has been shown to improve atrophic scars and may be expected to improve striae. To evaluate the efficacy of the 1,450-nm diode laser in the treatment of striae rubra and striae alba in Asian patients with skin types 4-6, striae on one half of the body in 11 patients were treated with the 1,450-nm diode laser with cryogen cooling spray with the other half serving as a control.56 None of the patients showed any noticeable improvement in the striae on the treated side compared to baseline and to the control areas. The study concluded that the non-ablative 1,450-nm diode laser is not useful in the treatment of striae in patients with skin types 4, 5, and 6.
1,064-nm Nd:YAG laser: A study aimed to verify the efficacy of this laser in the treatment of immature striae, in which 20 patients with striae rubra were treated using the 1,064-nm long-pulsed Nd:YAG laser.57 A higher proportion of patients (55%) considered the results excellent than did the assessing doctor (40%).
Intense Pulsed Light: In order to assess the efficacy of IPL in the treatment of striae distensae, a prospective study was carried out in 15 women, all of them having late stage striae distensae of the abdomen.58 All the study subjects showed clinical and microscopical improvement after IPL. It seems to be a promising method of treatment for this common problem with minimal side-effects, a wide safety margin and no downtime.
Fractional photothermolysis: To determine the efficacy of fractional photothermolysis in striae distensae, 22 women with striae distensae were treated with two sessions of fractional photothermolysis at a pulse energy of 30 mJ, a density level of 6 and eight passes, at intervals of 4 weeks; response to treatment was assessed by comparing pre- and post-treatment clinical photography and skin biopsy samples.59 Six of the 22 patients (27%) showed good to excellent clinical improvement from baseline, whereas the other 16 (73%) showed various degrees of improvement. The study concluded that fractional photothermolysis may be effective in treating striae distensae, without significant side effects.
Ablative 10,600-nm carbon dioxide fractional laser: Ablative 10,600-nm carbon dioxide fractional laser systems (CO₂ FS) have been used successfully for the treatment of various types of scars. To assess the therapeutic efficacy of CO₂ FS for the treatment of striae distensae, 27 women with striae distensae were treated in a single session with a CO₂ FS, and clinical improvement was assessed by comparing pre- and post-treatment clinical photographs and participant satisfaction rates.60 Evaluation of the clinical results 3 months after treatment showed that two of the 27 participants (7.4%) had grade 4 clinical improvement, 14 (51.9%) had grade 3 improvement, nine (33.3%) had grade 2 improvement and two (7.4%) had grade 1 improvement. None of the participants showed worsening of their striae distensae. To assess and compare the efficacy and safety of non-ablative fractional photothermolysis and ablative CO₂ fractional laser resurfacing in the treatment of striae distensae, 24 ethnic South Korean patients with varying degrees of atrophic striae alba on the abdomen were enrolled in a randomized blind split study and were treated with a 1,550-nm fractional Er:Glass laser and ablative fractional CO₂ laser resurfacing.61 The results of the study support the use of the non-ablative fractional laser and the ablative CO₂ fractional laser as effective and safe treatment modalities for striae distensae in Asian skin, with neither treatment showing greater clinical improvement than the other.
UVB/UVA1 combined therapy: Besides lasers, light sources emitting ultraviolet B (UVB) irradiation have been shown to repigment striae distensae. A study was conducted in 9 patients with mature striae alba who received 10 treatment sessions, with biopsies taken at baseline and at the end of the study.62 At the end of the study, all patients reported some form of hyperpigmentation that was transient and did not affect any surrounding tissue. No changes were seen on biopsy to indicate an effective collagen remodelling effect of the device, although this needs further assessment. Another study analysed the histologic and ultrastructural changes seen after UVB laser- or light source-induced repigmentation of striae distensae; analyses of biopsied skin after treatment with both the UVB laser and the light source showed increased melanin content, hypertrophy of melanocytes, and an increase in the number of melanocytes in all patients.63
Radiofrequency devices: RF devices are based on the principle of heat generation that occurs in response to poor electrical conductance, in accordance with Ohm's law (heat generation is directly correlated with tissue resistance). The heat that is generated is sufficient to cause thermal damage to the surrounding connective tissue,64 which is responsible for the partial denaturation of pre-existing elastic fibres and collagen bundles.65 Initial collagen denaturation within thermally modified deep tissue is thought to represent the mechanism for immediate tissue contraction; subsequent neocollagenesis further tightens the dermal tissue and reduces striae.66 The efficacy and safety of combination therapy with fractionated microneedle radiofrequency (RF) and fractional carbon dioxide (CO2) laser in the treatment of striae distensae has been evaluated, revealing that this combination is a safe treatment protocol with a positive therapeutic effect on striae distensae.67 A recent study evaluating the effectiveness of an RF device in combination with PDL subjected 37 Asian patients with darker skin tones and SD to a baseline treatment with an RF device and PDL.68 All histological evaluations demonstrated an increase in the amount of collagen fibres, and six of the nine specimens showed an increase in the number of elastic fibres. The TriPollar RF device appears to be a promising alternative for the treatment of striae distensae in skin phototypes IV-V.69
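For clarity, the physical relationship alluded to here is Joule (resistive) heating, a standard physics identity rather than a statement made in the cited sources: the power dissipated in tissue is P = I²R, so for a given current I, tissue with higher electrical resistance R converts more of the delivered energy into heat, which is why poorly conducting dermal tissue is heated preferentially.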
Needling therapy:
To evaluate the effectiveness and safety of a disk microneedle therapy system (DTS) in the treatment of striae distensae, 16 Korean volunteers with striae distensae alba or rubra were enrolled and received three treatments using a DTS at 4-week intervals.70 Marked to excellent improvement was noted in seven (43.8%) patients, with minimal to moderate improvement in the remaining nine. This study suggested that the DTS can be used effectively and safely in the treatment of striae distensae without significant side effects. Another study assessed and compared the efficacy and safety of needling therapy versus CO2 fractional laser in the treatment of striae, and the results supported the use of microneedle therapy over CO2 lasers for striae treatment.71
Platelet-rich plasma:
Platelet-rich plasma has wound-healing properties, affecting endothelial cells, erythrocytes and collagen,72 which may aid healing of the localized chronic inflammation believed to be a factor in the aetiology of striae distensae. Platelet-rich plasma is well tolerated by patients and is a safe and cost-effective treatment option for striae distensae.
Platelet-rich plasma alone is more effective than microdermabrasion alone in the treatment of striae distensae, but it is better to use the combination of both for more and rapid efficacy.73
The plasma fractional radiofrequency and transepidermal delivery of platelet-rich plasma using ultrasound has also been found to be useful in the treatment of striae distensae.74
Since thermal damage from intradermal RF has characteristics similar to those of many wounds, combination treatment with intradermal RF and autologous PRP would be expected to result in enhanced localized collagen neogenesis and redistribution. In one study, three sessions of intradermal RF combined with autologous PRP were administered once every four weeks.75 All of the participants showed satisfactory changes, and no patient was reported as showing no improvement.
Transepidermal retinoic acid:
Transepidermal retinoic acid delivery using ablative fractional radiofrequency associated with acoustic pressure ultrasound has also been used for the treatment of stretch marks.76
Conclusion
Striae distensae are an extremely common, therapeutically challenging form of dermal scarring. Adequate scientific knowledge and the evidence behind both preventative and therapeutic agents are vital in order to understand striae and to offer patients the best therapeutic options. The treatment of this cosmetically distressing condition has been disappointing and there is no widely accepted surgical procedure for improving the appearance of stretch marks. Laser therapy has been advocated as a treatment for striae distensae.
Hepatitis viruses are the most widespread cause of hepatitis, and of some cancers such as lymphomas, in humans1. Hepatitis is a serious disease of the liver, described as a lifelong infection with swelling and inflammation (presence of inflammatory cells) in the liver which, if it progresses, may lead to cirrhosis (scarring) of the liver, liver cancer, liver failure and death. Hepatitis B (HBV) and hepatitis C (HCV) are viral forms of hepatitis that can lead to jaundice (a yellow discolouration of the skin, mucous membrane and conjunctiva of the eye), anorexia (poor appetite), fatigue and diarrhoea; however, because most infected individuals remain asymptomatic, infection often remains undiagnosed and progresses to a chronic carrier state1-3. The hepatitis B virus is a DNA virus belonging to the Hepadnaviridae family of viruses, and the hepatitis C virus is a small, single-stranded RNA virus about 50 nm in diameter belonging to the Flaviviridae family. Hepatitis B surface antigen (HBsAg), consisting of the surface coat lipoprotein of the hepatitis B virus, is present in the serum of those infected with hepatitis B. Anti-HCV antibody is a substance that the body makes to combat HCV4. Hepatitis B virus is transmitted through blood and blood products, semen, vaginal fluids and other body fluids. Hepatitis C virus is a blood-borne, or parenterally transmitted, infection. Vehicles and routes of parenteral transmission include contaminated blood and blood products, multiple transfusions (thalassaemic and haemophilic patients), needle sharing, contaminated instruments (e.g. in haemodialysis, reuse of contaminated medical devices, tattooing devices, acupuncture needles, razors) and occupational and nosocomial exposure5-8. It stands to reason that there is an occupational risk of transmission of hepatitis viruses in the health care setting, where unknown carriers of hepatitis infection undergo procedures carrying a chance of percutaneous blood contact; transmission can occur from infected patients to staff, from patient to patient, and from infected providers to patients9. The lack of routine serological screening prior to surgery is one of the factors responsible for increased disease transmission. The major risk factors include re-use of contaminated syringes, surgical instruments and improperly screened blood products2. Without meticulous attention to infection control and to disinfection and sterilization procedures, the risk of transmission of blood-borne pathogens in the health care setting is magnified.
The aim of our current study was to estimate the incidence of hepatitis B and hepatitis C among patients undergoing eye surgery at the Department of Ophthalmology, Liaquat University of Medical and Health Sciences Jamshoro, at Hyderabad. This is one of the largest tertiary care centres in Sindh and a major referral centre for the whole of interior Sindh province.
MATERIAL AND METHODS
Study design and patients
This prospective observational study was carried out at Liaquat University Eye Hospital, Hyderabad, from June 2014 to February 2016. A total of 2200 patients undergoing eye surgery who were unaware of their hepatitis B and C status were included in the study. No restriction was placed on age or gender, in order to ensure maximum participation.
Blood samples
The blood samples of all these patients were collected in the hospital laboratory (Scientific Ophthalmic Diagnosis & Research Lab). Each patient was serologically screened using immunochromatography (the ICT method) for qualitative detection of hepatitis B antigen and hepatitis C virus antibodies, to determine carrier status before surgery.
The blood was collected by a qualified technician/phlebotomist of our hospital laboratory under the supervision of a consultant pathologist. Samples were allowed to coagulate at room temperature for 30 minutes and then centrifuged at 3000 revolutions per minute (rpm) for 10 minutes. The serum samples were separated and kept frozen at -20°C for chemical and immunoassays. Screening was based on the detection of HBV antigen and of HCV-specific antibodies in the sera using enzyme immunoassays. These tests only show whether a person has ever been infected with HBV or HCV, not whether the virus is still present. According to the manufacturers’ literature, the relative sensitivity and specificity of the HCV and HBV testing kits were 96.8% and 99%, respectively.
Patients whose screening results were positive were further tested by the enzyme-linked immunosorbent assay (ELISA) method (fourth-generation ELISA) for confirmation, and were advised to undergo polymerase chain reaction (PCR) testing for qualitative or quantitative detection of viral DNA/RNA.
All data were entered into SPSS version 16, and the prevalence and percentages of all variables were calculated.
RESULTS
A total of 2200 patients were operated on during the study period; 1255 (57.04%) were male and 945 (42.95%) were female.
Of these 2200 patients, 338 (15.36%) were serologically positive for hepatitis B virus or hepatitis C virus. Of the 338 positive patients, 56 (2.54%) were positive for hepatitis B surface antigen (HBsAg) and 282 (12.81%) were positive for hepatitis C antibody (HCVAb) (Figures 1 & 2). The majority were female, and 226 (66.86%), across both sexes, were in their fourth or fifth decade of life (Figure 3).
Figure 1- A: Serologically positive for hepatitis C antibodies, B: Serologically negative for hepatitis C antibodies
Figure 2 - A: Serologically negative for hepatitis B antigen, B: Serologically positive for hepatitis B antigen
Figure 3: Incidence of Hepatitis B & C in different age groups
DISCUSSION
Hepatitis B virus (HBV) and hepatitis C virus (HCV) are among the principal causes of liver disease, with frequency rates and patterns that vary across the world. The World Health Organization (WHO) estimates that there are nearly 4 million people with chronic HBV infection and 170 million people with chronic HCV infection worldwide, and that hepatitis B results in an estimated 563,000 deaths and hepatitis C in 366,000 deaths annually1, 6, 10-12. The occurrence of hepatitis varies from country to country. The WHO epidemiological estimates show a low prevalence of hepatitis C (<1%) in Australia, Canada and Northern Europe, and a prevalence of almost 1% in countries of medium endemicity, such as the USA and most of Europe. The frequency is highest (>2%) in many countries of Africa, Latin America, and Central and South-East Asia5. As far as the Pakistani population is concerned, the incidence of hepatitis B and C is escalating. Previous studies report that around 10% of the Pakistani population is affected by HBV and 5-10% by HCV3. Prevalence also varies among regions of the same country and continues to rise in certain parts; in rural areas in particular, the percentage of infected individuals is significantly higher2. The total incidence of hepatitis B and hepatitis C in our study was found to be 15.36%. This is comparable to the 12.99% found by Naeem et al2, while our previous study reported an incidence of anti-HCV of 29.60%7. W Ul Huda et al reported a 17.33% incidence of HCV infection among their operated patients5, whereas a study conducted by Khurrum et al reported a 6% incidence of anti-HCV antibodies in health care workers in a local hospital13. The prevalence of hepatitis B and hepatitis C in preoperative cataract patients has been found to be higher in males (59.18%) than in females (40.82%), and Ahmed et al also showed that the total prevalence of hepatitis B and hepatitis C among preoperative cataract patients was much higher in males than females14, which is contrary to our result. A study conducted in 2010 in different eye camps in Pakistan showed that 108 out of 437 patients were infected with hepatitis B or hepatitis C, with a higher prevalence in females (60.18%; 65/108) than in males (39.81%; 43/108)15. Concerning demographic variables, the risk of HCV seropositivity increased with age, i.e. 7.1% at 20 to 30 years of age versus 21.4% at 40 to 50 years. In our study, the higher prevalence of hepatitis B & C was in the age range of 30–60 years, which is comparable to the study of Talpur et al, in which 65% of positive patients were above the age of 40 years16.
This study shows that the prevalence of these hepatitis-causing viral pathogens is quite high. Doctors and paramedical staff in surgical and medical practice are at high risk of acquiring blood-borne diseases from the patients on whom they operate.
CONCLUSION
The aim of the present study was to assess the prevalence of HBV and HCV infection among preoperative patients. The incidence of these hepatitis-causing viruses is high in our population. Therefore, it should be mandatory to screen every patient for hepatitis B and C before any surgical procedure. Surgeons and health care professionals should protect themselves by using protective masks, protective eyewear and double gloves when handling infected cases. Used infected material, needles and other waste should be destroyed properly following biosafety protocols.
One in four adults is affected by musculoskeletal (MSK) problems, which account for up to 30% of General Practice (GP) consultations in the United Kingdom.1 Some GPs have direct access to community MSK services but, when these are not available, referrals are made to secondary care departments such as rheumatology. MSK training involves the skills that a rheumatologist needs to achieve competencies in the diagnosis and treatment of soft tissue rheumatism, as opposed to inflammatory rheumatic joint disease.
It has been reported that junior doctors in the United Kingdom fail to routinely screen for MSK conditions on admission to general medical or surgical wards2, which may reflect training issues. Our anecdotal impression was that MSK training at higher specialist level was also being compromised. Within the United Kingdom, doctors typically begin work as first-year rheumatology trainees four years after graduation from medical school, following completion of both a two-year foundation programme (a generic training programme which forms the bridge between medical school and specialist/general practice training) and a two-year Core Medical Training programme (undertaking between four and six rotations in different medical specialties). At the time of writing, higher specialty training, such as in rheumatology, began at the level of Specialist Trainee 3 (ST3) and was either a four-year training programme or a five-year programme if trainees were dually accrediting in general medicine.3 Higher specialist training involves rotating through different rheumatology departments within each Local Education Training Board (LETB).
In our opinion, the basic MSK skill set is essential to the training of a competent rheumatologist, and trainees gain overall MSK competencies within routine clinical practice as they rotate through different hospitals during training. However, in some training programmes there are very few MSK training opportunities, because MSK centres operating in the community in the United Kingdom mean that these patient groups are not treated in training hospitals. Faculty in these centres are competent to train, but training opportunities within them are reduced.
Rheumatology registrars in-training have expressed the anecdotal view that MSK training may be compromised, partly due to the reduction of referrals to secondary care and partly due to the inevitable focus on training in the more complex inflammatory conditions.
Rheumatology trainees in the UK were surveyed in 2015 on behalf of the Rheumatology Specialist Advisory Committee to assess confidence and ability in dealing with MSK conditions during and on recent completion of training. The survey was disseminated to rheumatology trainees via the trainee representative from each LETB.
77 responses were received across 15 LETBs, from a total of an estimated 223 trainees. 20 of these surveys were incomplete, with not all questions answered, but the questions that were answered were included in the results of this survey. Responses were received from trainees across all career grades, from ST3 (first year of specialist training) to two years post Certificate of Completion of Training.
58 out of 63 doctors (92%) thought MSK medicine to be an important part of rheumatology training. Free text comments recognised that MSK conditions were frequently referred to rheumatology and differentiating between inflammatory and non-inflammatory pain is important.
Only 41 out of 64 doctors (64%) felt they managed patients with soft tissue pathology on a daily basis, and 20 out of 63 (32%) felt they were not yet confident in diagnosing and distinguishing between different types of soft tissue pathologies.
Exposure to, and experience with MSK medicine in current jobs and throughout training ranged from poor to excellent.
Only 9 out of 58 trainees (16%) felt they were lacking in competency for their level of training in managing the MSK pathologies outlined in the Joint Royal Colleges of Physicians Training Board (JRCPTB) 2010 rheumatology curriculum. The majority of trainees felt they were either partially competent in all, or some areas, satisfactory for their level of training.
Interestingly, only 39 out of 58 trainees (67%) felt their training in injection techniques had been at least ‘adequate’. Some trainees mentioned they had been self-taught in some injection procedures and training had been limited in certain soft tissue injections (most commonly plantar fasciitis, tendon sheath and elbow enthesis injections).
This survey has limitations in that the numbers of trainees surveyed were small. However, our total response number considering the usual poor response rate for online surveys is reasonable. Our survey was not validated and it is likely that there will be an element of selection bias in the responses received.
However, one of the strengths of our survey is the ability to review responses by seniority. We further analysed the confidence ratings according to training grade, looking at two main subgroups: the more junior trainees (ST3 and ST4) and the more senior trainees (ST6 and ST7). As expected, the more junior cohort rated their confidence slightly lower than the more experienced group. Within the junior group (n=17), only 41% felt confident for their level of training when asked generically about their diagnostic skills in MSK, which improved to 59% when this question was mapped to the curriculum. In the senior group of ST6 and ST7 trainees (n=25), confidence levels were considerably higher (80% felt confident appropriate to their level of training) and there was no change in confidence levels when skills were mapped to the curriculum (Table 1). This may reflect the natural increase in experience and exposure to MSK medicine with progression in training, but also a better understanding of the curriculum requirements by the more senior trainees. Only one fully completed survey was received from a rheumatologist post Certificate of Completion of Training, making this subgroup too small for further analysis.
Table 1:
ST3 and ST4 (junior): n = 17; Q6) confident in dealing with MSK: 7 (41%); Q9) confident when mapped to curriculum: 10 (59%)
ST6 and ST7 (senior): n = 25; Q6) confident in dealing with MSK: 20 (80%); Q9) confident when mapped to curriculum: 20 (80%)
Q6) How confident do you feel in diagnosing and distinguishing between different types of soft tissue pathologies/MSK in your daily practice? Q9) Do you feel competent in diagnosing and managing the above MSK pathologies outlined in the 2010 rheumatology curriculum?
Within this limited survey, the views of 77 trainees have shown that training in MSK could be improved for rheumatologists in training at all levels. Although trainees felt they lacked confidence in dealing with certain areas of MSK medicine, when competencies were mapped to the rheumatology curriculum they felt they were achieving appropriate competency for their level of training, although this was not assessed objectively.
The trainees’ perception of the level of competency needed in dealing with MSK conditions seemed to overestimate the requirements of the 2010 rheumatology curriculum. In clinical practice, trainees may feel they encounter MSK pathologies that they are expected to manage but that are not given sufficient emphasis within the curriculum. Further questioning in this area may conceivably lead to adjustments within the curriculum and the training programmes.
In particular, to improve training in MSK medicine, rheumatology trainees valued teaching from physiotherapists and being able to attend specialist sports medicine clinics. Trainees who had ‘independently’ taken time to gain experience in this way felt that their training had benefitted. To support trainees in achieving these competencies, it may be worthwhile adding a prerequisite to the Annual Review of Competence Progression (ARCP) process (a formal method in UK medical training by which a trainee’s progression through their training programme is monitored and recorded) to ensure dedicated time is set aside for this aspect of MSK rheumatology training. Completion of a range of direct assessments, such as Clinical Evaluation Exercises (mini-CEX) and Directly Observed Procedures (DOPs), may ensure competency in this aspect of rheumatology training as well as secure confidence in dealing with MSK conditions and soft tissue pathology.
With changes in the nature and geography of rheumatology specialist services we feel these aspects of rheumatology training should not be overlooked so trainees are equipped to deal with MSK conditions independently by their completion of training.
Like doctors in other specialties, general medical practitioners (GPs) are exposed daily to human suffering which most of society tries to avoid.1 The World Health Organisation (WHO) predicts that by 2030 depression will be ‘the leading cause of disease burden globally’, and that 1 in 4 individuals seeking health care are ‘troubled by mental or behavioural disorders, not often correctly diagnosed and/or treated’.2, 3 Doctors suffer from stress, anxiety and depression (as well as vascular disease, cirrhosis of the liver and road traffic accidents) more than the general population.4-10 Help for doctors is inadequate and doctors are reluctant to seek help.1, 4, 11 Where improvement is suggested, it is usually in the form of counselling and general support.11 Instead of ‘more of the same’, we suggest a radically different approach: Adaptation Practice, which Clive Sherlock pioneered and has taught since 1975. It is pragmatic and safe. This study tests its acceptability to a group of working doctors.
Doctors bear the responsibility for fellow human beings’ health, well-being and, often, for their very survival. Added to this, GPs are under increasing pressure from more patients who want more cures and from health service managers who demand clinical excellence and more administration and more managerial skills of them. GPs’ stress is related to increasing workloads, changes to meet requirements of external bodies, insufficient time to do the job justice, paperwork, long hours, dealing with problem patients, budget restraints, eroding of clinical autonomy, and interpersonal problems.6, 10, 12 The recent rise in the GMC’s Fitness to Practice complaints related to patients’ expectations of doctors is yet another stress making them feel threatened.13
Job satisfaction for GPs is at its lowest level since a major survey began ten years ago, while levels of stress are at their highest. By 2015 there had been a year-on-year increase in the number of GPs reporting a slight to strong likelihood of leaving ‘direct patient care’ within five years: 53% of those under 50 and 87% of those over 50.6
By nature and vocation, GPs want to help but too much pressure is unbearable and takes its toll. They work, not with numbers, data or profits, but with human suffering, which, inevitably, is an emotional burden because of compassion and because it makes them aware of their own vulnerabilities and mortality. 3, 14 When combined with heavy workloads and low morale, doctors themselves inevitably suffer emotionally and psychologically.7, 10, 14 At the same time they and others feel they should be invincible.1, 15-17 What professional help is available for them is inadequate.3, 4, 18, 19 Existing support services in the UK are underfunded and sporadic.4 Some are outsourced to counselling services, and some of these are by telephone. Doctors do not like to be counselled and are reluctant to use these services.4, 15, 17
Doctors themselves are the mainstay for diagnosis and treatment of mental illnesses but are not adequately trained.3, 20 Mental illness is not well understood and conventional treatments are insufficient and often harmful. 15, 20-23 Consequently, doctors do not have the wherewithal to deal with the emotional and psychological problems they face every day in their patients and often in themselves, their colleagues and their families.4, 12, 18
There is significant prejudice, stigmatisation and intolerance of mental ill health within the medical profession due to lack of understanding and fear.3, 4, 9, 15, 17, 20, 21, 22, 24 This not only affects how doctors treat their patients, it also exacerbates their own difficulties when they suffer with emotional and psychological problems themselves, and dissuades them from self-disclosure and from seeking professional help.3, 4, 8, 10, 18, 20, 25, 26 To succumb to stress, anxiety and depression is seen as being weak and inadequate as a person and in particular as a doctor. 3, 4, 15 Doctors think they should know the answers and should be able to cope.1, 4
However, doctors are willing to learn work-related skills, as the present study set out to show.11 Adaptation Practice is training, not treatment or therapy. The course in this study was presented as a postgraduate programme for doctors to learn how to cope with stress, anxiety and depression.
METHOD
Recruitment
We wrote to all 314 GPs registered in one UK urban and semi-rural Health Authority Area, asking whether they would be interested in a course of twelve fortnightly seminars to learn the basics of Adaptation Practice: a programme of self-discipline to cope with stress, anxiety and depression. The Hospital Anxiety and Depression Scale (HADS) was included with the letter. Those who responded and whose HADS scores were ≥ 8 (the threshold for anxiety and depression) were invited to the course.
Stress, anxiety and depression
Anxiety and depression were assessed by the HADS and stress by a simple stress scale (SSS – see Table 1) one month before training started, immediately prior to training, at three months (mid-way through the training) and at six months (at the end of training).
Table 1: The Simple Stress Scale, developed by Clive Sherlock and used to assess the level of stress in a subject. A total score ≥ 8, out of a maximum of 24, is suggestive of a disturbing level of stress or burnout. (A hypothetical worked scoring example is given after the scale.)
I feel I am under too much stress: 0 hardly ever 1 occasionally 2 most of the time 3 all the time
I feel exhausted: 0 seldom 1 some of the time 2 much of the time 3 most of the time
I care about other people: 0 as much as I ever did 1 rather less than I used to 2 definitely less than I used to 3 hardly at all
I have lost my appetite: 0 not at all 1 a little 2 moderately 3 significantly
I sleep well: 0 most of the time 1 quite often 2 occasionally 3 not at all
I am irritable: 0 not at all 1 occasionally 2 quite often 3 very often indeed
I feel dissatisfied: 0 never 1 occasionally 2 quite often 3 most of the time
I feel run down: 0 not at all 1 occasionally 2 quite often 3 most of the time
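As a hypothetical worked scoring example, not drawn from any study participant: a doctor answering ‘occasionally’ (1) to feeling under too much stress, ‘much of the time’ (2) to exhaustion, ‘rather less than I used to’ (1) for caring about other people, ‘not at all’ (0) for loss of appetite, ‘occasionally’ (2) for sleeping well, ‘quite often’ (2) for irritability, ‘quite often’ (2) for dissatisfaction and ‘occasionally’ (1) for feeling run down would score 1 + 2 + 1 + 0 + 2 + 2 + 2 + 1 = 11, above the suggested threshold of 8.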
Evaluation of Adaptation Practice
Half of those GPs who applied for the course were unable to attend because of prior commitments on the days planned for the course. These acted as a control group. Those who attended the course were the study group. All those who attended were also assessed in private by the authors immediately before and throughout the course. At the end of the course the doctors wrote anonymous self-assessments.
Training in Adaptation Practice
Those attending the course were taught not to express and suppress upsetting and disturbing emotion, not to distract their attention from it (including not to think about it and not to analyse it) and not to numb themselves to it with chemicals (alcohol, recreational drugs or prescribed medication). Instead, they learned how to engage with their moods and feelings physically, not cognitively, and how not to engage with thoughts about them. They were instructed to practise this six days a week with whatever they were doing, wherever they were. They were all offered unrestricted confidential telephone and e-mail support from the authors between training sessions and after the course had finished.
Statistical Analyses
The results are reported as means ± standard errors of the means. The scores were normally distributed and the data were analysed by analysis of variance with additional paired comparisons within periods, using the LSD method. Correlations were determined using Pearson’s correlation. The analyses were carried out using the statistical software programme SPSS 17.0 for Windows.
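For readers wishing to reproduce this kind of analysis outside SPSS, the outline below sketches how the Pearson correlations and a simple paired before-and-after comparison could be computed in Python with SciPy. It is an illustration only: the scores and variable names are made up, and a paired t-test is used in place of the analysis of variance with LSD comparisons used in the study.

import numpy as np
from scipy import stats

# Hypothetical HADS and Simple Stress Scale scores for one group (not study data)
anxiety = np.array([9, 11, 8, 14, 10, 12, 9, 13])
depression = np.array([8, 10, 7, 12, 9, 11, 8, 12])
stress = np.array([10, 12, 9, 15, 11, 13, 10, 14])

# Pearson correlations between the three measures (r and two-tailed P value)
for name, scores in [("depression", depression), ("stress", stress)]:
    r, p = stats.pearsonr(anxiety, scores)
    print(f"anxiety vs {name}: r = {r:.2f}, P = {p:.3f}")

# Paired comparison of scores before and after training within the same group
before = np.array([11, 13, 9, 14, 10, 12, 11, 13])
after = np.array([7, 9, 8, 10, 8, 9, 7, 10])
t, p = stats.ttest_rel(before, after)
print(f"paired comparison: t = {t:.2f}, P = {p:.3f}")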
RESULTS
Recruitment
Of 314 registered GPs, 225 (72%) responded to our initial contact, and of these 152 (68%) said they would be interested in participating in the training course. Recruitment was restricted to those with HADS scores ≥ 8. Of the 225 who responded, 71 (32%) scored ≥ 8 for anxiety, 35 (16%) for depression and 79 (35%) for both. 29 (13%) applied to attend the course. All were experienced GPs. 15 of these attended; 14 were unable to attend because of pre-existing commitments on the course dates. They asked for alternative dates, but these were not available.
At the initial assessment (one month before the course started) there were significant correlations between the scores for anxiety and depression (P < 0.001), anxiety and stress (P < 0.001) and depression and stress (P < 0.001). At the second assessment immediately before the course started these correlations remained highly significant.
Effects of Adaptation Practice
All those who attended the course reported a subjective improvement in their abilities to cope with their own stress, anxiety and depression, and in their sense of well-being.
Anxiety
There were no significant differences between the control and study groups either one month before the start (P=0.949) or immediately before the first session (P=0.914). The anxiety scores in both groups remained greater than 8 at both assessments (Figure 1). At the mid-point of the course the mean score had fallen slightly in the study group (Figure 1), but the difference was not significant (P=0.652). By the end of the course the mean anxiety score in the study group was significantly lower (P=0.008) than that of the control group (Figure 1). The mean scores for anxiety decreased over the four assessments; this tendency was significant in the study group (P=0.002) but not in the control group (P=0.567).
Figure 1: The mean anxiety scores and standard errors of the means (SEM) for a control group and a study group of doctors with pre-existing signs of anxiety, assessed twice before, once during and once at the end of a six-month course in Adaptation Practice.
Depression
There were no significant differences between the control and the study groups either one month before the start (P=0.310) or immediately before the first session (P=0.880). The mean HADS scores for depression before training were all greater than 8 (Figure 2). At three months (the mid-point of the course) the difference between the mean scores of the two groups was not significant (P=0.631). At the end of the course the mean depression score in the study group was significantly lower (P=0.046) than that of the control group (Figure 2). The mean scores for depression decreased over the four assessments; this tendency was significant in the study group (P=0.003) but not in the control group (P=0.689).
Figure 2: The mean depression scores and standard errors of the means (SEM) for a control group and a study group of doctors with pre-existing signs of depression, assessed twice before, once during and once at the end of a six-month course in Adaptation Practice.
Stress
There were no significant differences between the control group and the study group either one month before the course started (P=0.234) or immediately before it started (P=0.505). The stress scores were all greater than 8 (Figure 3). At three months (the mid-point of the course) the difference between the mean scores of the two groups was not significant (P=0.621). At the end of the course the mean stress score in the study group was lower than that of the control group, although this did not reach significance (P=0.077) (Figure 3). The mean stress scores decreased over the four assessments; this decrease was significant in the study group (P=0.001) but not in the control group (P=0.425).
Figure 3: The mean stress scores and standard errors of the means (SEM) for a control group and a study group of doctors with pre-existing signs of stress, assessed twice before, once during and once at the end of a six-month course in Adaptation Practice.
Correlations
At all four assessments there were correlations among all three psychological parameters. At the initial assessment the correlation between anxiety and depression (r2= 0.405; P = 0.029) and between depression and stress (r2= 0.800; P < 0.0001) were significant but the correlation between anxiety and stress was not (r2= 0.253; P = 0.185). At the commencement of the course the correlation between anxiety and depression (r2= 0.479; P = 0.009), between depression and stress (r2= 0.765; P < 0.0001) and between anxiety and stress (r2= 0.486; P = 0.007) were all significant.
At three months (the mid-point) the correlations between anxiety and depression (r2= 0.526; P = 0.003), between depression and stress (r2= 0.622; P < 0.0001) and between anxiety and stress (r2= 0.790; P < 0.0001) were all significant; similarly, at the end of the course the correlations between anxiety and depression (r2= 0.604; P = 0.001), between depression and stress (r2= 0.577; P = 0.001) and between anxiety and stress (r2= 0.740; P < 0.0001) were all significant.
Assessments of the doctors’ psychological states and methods of coping
The doctors attending the course were assessed individually in private. They variously complained of stress, anxiety and depression. Notable findings included suicidal thoughts, plans for suicide, self-medication, excessive consumption of alcohol and an intention to leave the medical profession because of the unbearable pressures involved.
By the end of the course all these signs and symptoms had improved and the doctors felt confident in their ability to cope not only with pressures from outside but also with emotion, moods and feelings inside. One doctor still wanted to leave the profession but less adamantly than before, and stayed.
There was no qualitative assessment of the control group.
Qualitative Self-assessments
The anonymous self-assessment reports give meaningful, subjective accounts of what the doctors experienced individually. They fall into four main themes. There were no negative comments.
Connecting with emotion physically in the body
The following comments indicate contact with emotion:
‘I am more aware of my feelings.’
‘It is difficult to say “Yes” to unpleasant or upsetting feelings and situations. I have always preferred to avoid them and I have had a lifetime of suppressing emotions, so it is very difficult to say “Yes” to them, but this is what I am now doing.’
‘Since I’ve been more aware of my feelings there has been an enormous improvement in concentration.’
Developing inner emotional strength and coping.
A number of comments indicate the need to develop the strength to contain emotion physically in the body:
‘I am more accepting of daily stresses at work.’
‘I try to deal with problems instead of feeling so desperate and so wronged by them.’
‘I am calmer, and I lose my temper less often and less dramatically.’
‘The Practice was difficult initially because of my own resistances to it.’
‘I’ve always avoided seeking help for myself. I often feel worse than the patients I prescribe antidepressants for. I can now cope and I feel stronger but I don’t feel I’ve been treated and I now realise I didn’t need treatment: I needed to learn what to do and how to do it.’
Dealing with unpleasant, unwanted thoughts.
These comments illustrate the doctors’ new reactions to thoughts as they started to address the underlying emotion that normally drives worrying thoughts:
‘I now have less ruminations.’
‘As a long-standing ruminator I now realise these thoughts are the source of many anxieties. Thoughts were the main problem for me.’
‘I have learned to deal with obsessional thoughts by not giving time to them.’
General well-being.
The doctors commented on their sense of general well-being and ability to cope:
‘I am less tense and less anxious.’
‘I am now coping with episodes of work overload much better.’
‘I am feeling better generally.’
‘This has given me confidence to pursue the course of action I knew was correct.’
‘There are all-round improvements because of adapting myself to work and other people.’
‘I am happier and more content, optimistic and much less negative.’
DISCUSSION
Varying degrees of stress, anxiety or depression are universal.16, 26 Only about half of those thought to be clinically affected by these conditions seek help for them.26 If put into practice, sound medical knowledge and training can be beneficial to doctors’ own health.1, 11, 15 This does not seem to be true for stress, anxiety and depression.22 Too little is known about emotional and psychological problems, and treatment for them is inadequate.2, 9, 15, 17, 19, 21, 23, 28, 29
In this study, there was a high level of interest in how to deal with stress, anxiety and depression. Almost one third of respondents had scores on the HADS and SSS that suggested worrying levels of emotional and psychological problems amongst these working GPs. The fact that 152 doctors (68% of respondents) declared an interest in a six-month evening course (90 minute sessions after work on Thursday evenings) to learn how to deal with these conditions, suggests that:
stress, anxiety and depression are significant problems either amongst their patients or for the GPs themselves, or both
GPs are not confident in their ability to deal with them and want to learn more
although they have a strong tendency not to admit that they cannot cope and not to seek help, doctors are willing to attend a course to train and to learn.11
Most doctors tell their patients to seek professional help and to talk about their feelings but do not do so themselves.3, 5 They prescribe drugs for their patients that either the doctors will not take themselves or that they take but find ineffective. Of the 15 GPs on the course only one had mentioned psychological difficulties to a colleague and one to a partner and both only reticently. 16
ADAPTATION PRACTICE
Adaptation Practice strongly discourages self-disclosure, except in private to the Adaptation Practice teacher, which is necessary in order to assess the nature and severity of any problems and to lay the foundations for a rapport. Adaptation Practice sessions involve detailed discussion of moods and feelings as physical sensations and powerful forces that affect behaviour in all human beings. The ethos in Adaptation Practice is for participants to learn from their own experience how they are affected by emotion and how they can change this by containing themselves and not letting the emotion control them. It is not to criticise, judge, blame or condemn. Consequently, without the causes of stigmatisation, there is no prejudice and no stigma; instead there is respect and dignity and a pragmatic attitude to change.16
Adaptation Practice trains individuals to bear and endure upsetting, disturbing emotion by not expressing it, not suppressing it, not distracting themselves from it and not numbing themselves to it with drugs (alcohol, recreational drugs or prescribed medication). Bearing it this way develops emotional strength and resilience.
The high level of interest, the willingness to attend in groups and the positive results from this study indicate that Adaptation Practice is an acceptable way of teaching doctors how to cope with their own stress, anxiety and depression, that makes sense intellectually and emotionally. This, as well as the pragmatic approach mentioned above, makes Adaptation Practice radically different from other approaches.
GENERAL COMMENTS
The fact that those who could not attend asked for an alternative day gave us reason to assume that the manner in which the study group and the control group were selected – individual availability on a given week night – would not have biased the sampling, and it seems reasonable to assume that the two groups did not differ in any meaningful way that would have biased the outcome.
Not surprisingly, there were strong positive correlations between anxiety and depression and between depression and stress on all assessments and between anxiety and stress on all but the first assessment, suggesting a strong association among these parameters of psychological states.
The mean scores for anxiety, depression and stress fell significantly in the participating GPs compared with the control group. The subjective reports from both the medical assessments and the self-assessments support these changes in the study group.
This study raises a number of important questions:
if doctors are prejudiced and stigmatise mental illness amongst themselves then what are their conscious, or unconscious, attitudes to mental illness in their patients? 3, 5, 9, 24
if doctors cannot cope with emotional problems themselves then whom are they and their patients to turn to for help? This is not the same as doctors suffering from physical conditions requiring surgery, or medications such as insulin or antibiotics.3, 5 Doctors expect, and are expected, to be able to cope with their emotions and, if necessary, that the treatment they give their patients will also work effectively for themselves
what are doctors to do and what are their patients to do, when doctors succumb to their own moods and feelings?
Adaptation Practice could be integrated in the medical curriculum at undergraduate and postgraduate levels, including Continuing Professional Development (CPD). Not only GPs but all doctors and other healthcare staff (nurses, physiotherapists, occupational therapists, social workers, etc.) could develop emotional resilience – which the GMC have proposed in recent years – and a better understanding of emotional and psychological problems and mental illnesses.
It is hoped that this preliminary study will stimulate and encourage a new way of looking at and investigating emotional and psychological problems and lead to further evaluation of Adaptation Practice.29, 30
With adequate training doctors and psychologists could teach Adaptation Practice.
The CASC (Clinical Assessment of Skills and Competencies) has been running since 2008 and is the final membership examination for the Royal College of Psychiatrists (MRCPsych).1 It is a clinical examination and follows an OSCE format (Objective Structured Clinical Examination), where candidates move through 16 short stations.2,3 We have been running a mock CASC in the West of Scotland for the last few years and have received consistently good feedback from candidates. This article describes our experience of organising the mock exam.
Step 1: Practicalities
The organising committee
Our mock CASC is arranged by the organising committee for the local core psychiatry education programme (MRCPsych course). This committee comprises a consultant chair, a higher trainee chair and one or two trainee representatives from each higher subspecialty and each core training level. The higher trainee chair takes the lead with organising the mock, with the support of other committee members. This works well, as the trainees have recent experience of sitting or preparing for the exam and are enthusiastic about medical education.
Support from our postgraduate operations manager is invaluable. She works closely with the committee to book the venue and actors, and order equipment such as screens and a bell. She also has a key role in advertising the mock exam to trainees, booking places and being a point of contact for candidates. She assists with set up on the day of the exam and prints station instructions, marking schemes and labels for candidates.
Venue
Our mock exam is held in the same venue as our core trainee educational programme. We have 2 rooms to use for stations and a waiting area for candidates. Screens are borrowed from Glasgow University Medical School to create separate stations.
Timing
The mock exam takes place around 2 weeks before the CASC. This enables candidates to have prepared for the exam and leaves some time to work on any issues identified by the mock. Planning usually starts 4-5 months in advance of this, with increasing intensity and time commitment as the exam approaches.
Step 2: Mock Exam Format
CASC format and blueprint
The CASC itself includes morning and afternoon circuits, which all candidates will move through. There are 16 stations in total, with 90 seconds between each to read the task instructions. The morning session comprises 4 pairs of ‘linked’ stations, lasting 10 minutes each. In these paired stations, the second station is connected to the first in some way, such as taking a history in the first part then discussing with a family member in the second. Each station is marked independently. The afternoon consists of 8 single stations, lasting 7 minutes each. A passing list is posted online after a few weeks, with specific feedback made available to unsuccessful candidates.2
Mock CASC format
Our mock CASC is run in one afternoon session from 1.30-5pm. We have been able to include 16 stations by running the 2 circuits simultaneously. To make this possible, each station is 7 minutes in duration, with 90 seconds between stations. 4 candidates start after a delay, as it is not possible to start on part 2 of a linked station. Trainees are allocated candidate numbers and starting stations for both circuits to coordinate this effectively (figure 1). 16 candidates can take part in the mock exam.
Figure 1. Candidate numbers (the Candidate Name column is left blank)
Number | 1st loop starting station | 2nd loop starting station
1 | 1a | 5
2 | 2a | 6
3 | 3a | 7
4 | 4a | 8
5 | 1a (8.5 min delay) | 9
6 | 2a (8.5 min delay) | 10
7 | 3a (8.5 min delay) | 11
8 | 4a (8.5 min delay) | 12
9 | 5 | 1a
10 | 6 | 2a
11 | 7 | 3a
12 | 8 | 4a
13 | 9 | 1a (8.5 min delay)
14 | 10 | 2a (8.5 min delay)
15 | 11 | 3a (8.5 min delay)
16 | 12 | 4a (8.5 min delay)
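For readers who want to adapt this rotation, the allocation in Figure 1 follows a simple pattern: candidates 1-8 begin on the linked circuit (always at an 'a' station, the second four after an 8.5-minute delay, i.e. one 7-minute station plus 90 seconds of reading time), candidates 9-16 begin on the single-station circuit, and everyone swaps circuits for the second loop. The short Python sketch below is purely illustrative (it is not part of our exam materials, and the function and constant names are our own), but it reproduces Figure 1 and is easy to modify if the station length or number of candidates changes.

```python
# Illustrative sketch: generating the candidate allocation shown in Figure 1
# for a 16-candidate mock CASC run as two simultaneous 8-station circuits.
# Station labels and timings follow the article; names here are our own.

STATION_MIN = 7          # each mock station lasts 7 minutes
READING_MIN = 1.5        # 90 seconds to read the next task
DELAY_MIN = STATION_MIN + READING_MIN  # 8.5 min delay for late starters

LINKED_STARTS = ["1a", "2a", "3a", "4a"]   # candidates may only start on part 'a'
SINGLE_STARTS = ["5", "6", "7", "8", "9", "10", "11", "12"]

def allocate_candidates():
    """Return (candidate number, 1st loop start, 2nd loop start) rows."""
    rows = []
    for n in range(1, 17):
        if n <= 8:
            # Candidates 1-8 begin on the linked circuit; 5-8 start one
            # station later so that they never open on a 'b' station.
            first = LINKED_STARTS[(n - 1) % 4]
            if n > 4:
                first += f" ({DELAY_MIN} min delay)"
            second = SINGLE_STARTS[n - 1]
        else:
            # Candidates 9-16 begin on the single-station circuit and move
            # to the linked circuit (13-16 with the delayed start) after the break.
            first = SINGLE_STARTS[n - 9]
            second = LINKED_STARTS[(n - 9) % 4]
            if n > 12:
                second += f" ({DELAY_MIN} min delay)"
        rows.append((n, first, second))
    return rows

if __name__ == "__main__":
    for number, first, second in allocate_candidates():
        print(f"Candidate {number}: 1st loop {first}, 2nd loop {second}")
```

Running the sketch prints the same 16 rows as Figure 1, which makes it straightforward to regenerate the allocation sheet when the format changes.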
At least 3 other higher trainee helpers are recruited to assist the coordinator on the day of the exam. The same marking scheme is used for each station, covering domains common to all stations, such as building rapport and range and depth of questioning. There is also space for specific feedback, which examiners are encouraged to provide, though they do not interact with candidates directly during the exam. Forms are completed contemporaneously and distributed to candidates immediately after the mock exam.
Step 3: Writing the Stations
The content of the CASC follows a blueprint, which is available through the RCPsych website. A variety of skills are tested during the 16 stations of the exam, including history-taking, mental state examination, risk assessment, cognitive examination, physical examination, case discussion and difficult communication.3 We refer to the blueprint when selecting which stations to include in the mock. Each year, recent CASC candidates are asked to suggest stations and we combine these with previous stations to construct the mock exam. New stations are written by the trainee who suggested them, including candidate instructions and actor’s notes. The higher trainee organiser formats these to maintain consistency across the mock exam.
Role-play actors
For our most recent mock CASC, we employed paid actors for every station. These actors are part of a local agency which has experience in working as simulated patients for Glasgow University exams and communication skills sessions. In previous years, we have recruited a combination of core/higher trainees and actors. We have found pros and cons to each approach. The use of paid actors was more realistic for trainees, but writing scripts proved more challenging as instructions required greater detail in relation to specific psychiatric information, such as how a person with mania may present.
Step 4: Recruiting Examiners
There are a number of local consultant psychiatrists who are actively involved in teaching. They are supportive of the mock CASC and enthusiastic about examining stations. After the stations have been devised, consultants are invited to examine based on their areas of expertise. Higher trainees with particular interest in education are then asked to examine any remaining stations. Candidate instructions and actor’s notes are circulated to examiners in advance (figure 2). A simple guide to the mock exam is also sent to any new examiners. As we use the same marking scheme for each station, it is down to the experience of the examiner to consider whether the candidate has addressed the specific tasks appropriately.
Figure 2. Station template
Station | Title | Actor | Examiner | Written? | Sent?
(one row per station: 1a, 1b, 2a, 2b, 3a, 3b, 4a, 4b, 5, 6, 7, 8, 9, 10, 11, 12; the remaining cells are blank in the template)
Step 5: The Day of the Mock Exam
The higher trainee coordinators arrive at least 1.5 hours prior to the start time to set up the venue. Examiners, actors and candidates are asked to arrive 15-30 minutes before the scheduled start time. Each group is briefed on the exam format and given the opportunity to ask questions. Actors and examiners are shown to their stations and allowed time to discuss them. Candidates are provided with numbered labels to wear, for examiners to record on their marking sheets. The 4 candidates who will be starting later are asked to wait, while the others are shown to the examination rooms. They are shown which station they will be starting at, then queue in order at the door.
The mock exam coordinator stands where they can be heard by both rooms. There are 1 or 2 helpers in each room to guide the candidates around the circuit. The bell is rung to signify the start of the exam, the end of the 90 second preparation time (the start of each station), 1 minute warning and the end of each station. The candidates who were allocated to start later are brought into the circuit and shown to their station as the ‘end of station’ bell is rung, as this also signifies the start of the 90 second preparation time for the next station.
Marking schemes are collected and sorted during the break. Candidates are not kept separate from examiners, actors or each other. They are advised that they will get the most out of the afternoon if they don’t discuss the stations but this is ultimately their choice.
Examiners and actors return to their stations following the short break and candidates swap between circuits. Following the second round, there is another short break. Actors are excused at this point and most examiners remain for a feedback session. Marking schemes are collected, sorted and distributed to candidates. One room is re-set for group feedback. Examiners are thanked for their time and each is asked in turn for general feedback, hints and tips on their station (figure 3).
Figure 3. Mock CASC timing
11.30am | Coordinators arrive & set up venue
12.30-12.45pm | Candidates, actors & examiners on site
1pm | Exam starts
2.15pm approx | Break
2.45pm | Exam re-starts; candidates switch circuits
4pm approx | Exam ends
4.30pm | Feedback forms distributed / group feedback session
5pm | End
Overcoming Potential Problems
Examiners
Our examiners are all either higher trainee or consultant psychiatrists. It is therefore possible that unforeseen circumstances mean that they may have to cancel at very short notice, attend late or not at all, or be called away during the mock exam. Each of these has happened over the past few years but has been easily managed by the extra higher trainee helpers stepping in to examine a station.
Actors
All of our actors have attended as planned. If a paid actor has to cancel at short notice, the agency will find a replacement. If a trainee actor did not attend, one of the higher trainee helpers could step in. Another potential problem which we have encountered is actors not performing as intended. This can be minimised by preparing clear instructions for actors, with examiners providing some direction if needed.
Quality control of stations
It is difficult to know how a station will work until it is used in the mock exam. To minimise the risk of problems, stations are checked and formatted before the exam. They are circulated to actors and examiners in advance to allow time for them to raise any concerns and clarify any uncertainties. Despite following these processes, there are some stations which appear unclear or do not run smoothly on the day of the mock exam. It is helpful to receive feedback from examiners, so that stations can be amended or avoided in future.
Timing
Our mock exam follows a very tight schedule. If the mock exam itself overruns, then fewer examiners may be able to remain to provide feedback. The higher trainee coordinator should keep time carefully during the day of the mock exam, particularly during break times, which are very brief. Support from other higher trainee helpers is essential for this, in working together to collect and organise marking sheets and guiding examiners, actors and candidates to stations. Late arriving examiners remain a potential problem due to their other commitments. We provide a sandwich lunch to our examiners prior to the exam, which helps with prompt attendance.
Venue
The venue we use is quite cramped and can become noisy. We have access to only 14 screens to divide the 16 stations so they are close together and those at the ends of the room are not fully enclosed. The screens are thin so only muffle the noise from neighbouring stations. These issues of space and noise are difficult to overcome. Trainees are warned in advance and aim to focus on their own station as much as possible.
Recruiting actors, examiners and candidates
To date, we have not experienced any problems in recruiting actors, examiners or trainee helpers. We have been able to fill all candidate places and often have a waiting list. If necessary, the mock exam could be run with fewer stations and still provide helpful practice for trainees. If resources were tight, trainees could be role-play actors for some or all of the stations.
Conclusion
We run a local mock exam annually due to continued demand from trainees. It takes significant time and effort to arrange but is good experience for the organiser and local trainees and consultants remain enthusiastic. The mock CASC in the West of Scotland puts a strong emphasis on providing feedback for trainees to work on. The provision of completed marking sheets on the day of the exam and the group feedback session help with this ethos. Our experience has shown that a mock CASC can be delivered locally, at a low cost, while still providing trainees with a realistic exam experience.
Schizophrenia sufferers feel like abstract entities with non-animated bodies, often experiencing auditory verbal hallucinations (AVH) due to morbid “objectification” of inner dialogue.1 From the patient’s perspective, AVHs are a subjective–objective phenomenon. AVH is a non-consensual, dynamic and psychologically charged experience, and the voices often echo significant emotions. Derogatory voices are common representations of unconscious self-hatred that cannot stand up to the external world’s logic; thus, patients need help to integrate these experiences. Auditory hallucinations may arise from an interaction between biological predisposition and perceptual and cognitive factors. According to an integrated model of auditory hallucinations (AHs) suggested by Waters et al,2 AHs arise from an interaction between abnormal neural activation patterns that generate salient auditory signals and top-down mechanisms that include signal detection errors, executive and inhibition deficits, and a tapestry of expectations and memories. Recently, neuro-quantologists have proposed that AVHs may be an objectification of parallel thinking/quantum thinking,3 with parallel thinking acting as a source for thought insertion. AVHs also vary along a number of dimensions. Experiencing AVH has a serious impact on the quality of life of the affected individual and is a significant factor in the prevalence of suicide among patients with schizophrenia.4
Incidence
One in four schizophrenia sufferers experiences persistent AVH.5 AVHs are experienced by approximately 53% of schizophrenia sufferers6 and are present in 28% of major affective disorders (Goodwin & Jamison).7 Evidence indicates that each patient responds differently to the voices, according to his/her evaluation of them (Table 1), which influences the degree of intervention required. Specific dimensions of AVHs can give hints about the future likelihood of treatment resistance. Although the percentages differ across studies, it is assumed that about 30% of patients have command hallucinations, which are seen as the ultimate betrayal of the mind.8 Often the content of such messages is negative; thus, commanding AVHs are more distressing than commenting ones. Schizophrenia predisposes such patients to a greater risk of suicide and homicide. Command hallucinations are more prevalent among forensic patients and contribute to their forensic status.
Table 1. Patients’ Response to AVH
1. Anxiety and panic feelings
2. Fear
3. Feelings of humiliation
4. Entrapment
5. Self harm thoughts
6. Harm to others
7. Avoidant or withdrawn
8. Shouting and swearing
9. Ritualistic behaviour
10. Substance or alcohol abuse
11. Resistance
12. Amusements
13. Engagement and courting the voices
14. Appeasement
The multi-factorial polygenic model of schizophrenic disorders has received great support and signifies that genetic factors play a bigger role than environmental factors in the familial transmission of these disorders. Relevant studies provide little support for the mechanism of single major locus inheritance. A mechanism involving two, three, or four loci cannot be ruled out, even though there is no compelling support for such models.9 It has also been proposed that a single gene may even be responsible for hallucinatory experiences,10 implying that those who have not inherited such a gene may not experience auditory hallucinations but could still experience other characteristic symptoms of schizophrenia. One may also hypothesise that an individual who has inherited such a “hallucinatory gene” but not all the schizophrenia genes could hear non-clinical voices without having other schizophrenic symptoms. It is also arguable that those who carry such a specific gene are more vulnerable to experiencing hallucinations when they abuse psychoactive substances and could be misdiagnosed as having schizophrenia, although the hallucinations may cease once they abstain from illicit drug use.
Measurements for Assessment
AVH is a subjective experience and is hard to measure objectively. Several rating scales are now available for efficient evaluation of different aspects of voice activity. Some are general measures, while others are designed specifically for hallucinations. Using rating scales facilitates better engagement with patients and helps reinforce the message that patients, and the distress they experience, are being carefully considered.
The Beliefs About Voices Questionnaire (BAVQ) is an assessment scale useful in measuring key beliefs about the voices.11 It is typically used in conjunction with the Cognitive Assessment Schedule (CAS).12 The Voice Compliance Scale (VCS) is an observer-rated scale aimed exclusively at measuring the frequency of command hallucinations and the level of obedience or confrontation with each recognised command.13 The Voice Power Differential Scale (VPD) rates the perceived power differential between the voice and the voice hearer.14 The Omniscience Scale (OS) is intended to quantify voice hearers’ beliefs about their voices’ knowledge of their personal information.15 Another measure presently in use is the Risk of Acting on Commands Scale (RACS), designed to assess the level of risk of acting on commands and the amount of associated distress.16
The Bonn Scale (BSABS) is used for the assessment of basic symptoms,17 while the Schizophrenia Proneness Instrument (SPI-A)18 and the Examination of Anomalous Self Experience (EASE)19 are useful aids in identifying minimal changes in subjective experience and for longitudinal monitoring (Table 2). In the extensively used Positive and Negative Syndrome Scale (PANSS), the hallucination item is one of seven in the positive subscale, which also includes delusions, conceptual disorganization, excitement, grandiosity, suspiciousness, and hostility. Given the number of scales in use, there is an obvious risk that differential anti-hallucinatory efficacy among antipsychotic drugs may be obscured when clinical trials report only sum scores for the whole sample.
Table 2. Measurement scales
Beliefs About Voices Questionnaire (BAVQ)
Cognitive Assessment Schedule (CAS)
Voice Compliance Scale (VCS)
Voice Power Differential Scale (VPD)
Omniscience Scale (OS)
Risk of Acting on Commands Scale (RACS)
Bonn Scale (BSABS)
Schizophrenia Proneness Instrument (SPI-A)
Examination of Anomalous Self Experience (EASE)
Positive and Negative Syndrome Scale (PANSS)
Treatments
Although many forms of treatment aiming to eliminate AVH or improve quality of life are available, the use of medication is the most prevalent. Besides drug treatment, non-invasive physical treatments, such as TMS, and different forms of psychological intervention have recently evolved. Drug therapies are aimed at symptom eradication, whereas psychological therapies tend to foster healing, recovery and personal growth. Rather than being specifically anti-hallucinatory, neuroleptics typically offer a generalised calming effect, giving patients some “breathing space” to work through their voices. Non-pharmacological tools are needed in the long-term management of refractory cases. Presently, intervention strategies for AVH are based on different models of hallucinations, but regrettably no clear models have been established.
Pharmacotherapy
The current understanding of AVH and the neural mechanisms involved is limited, and knowledge of how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations is inadequate. Consequently, using pharmacotherapy in the management of AVH remains very challenging.20 Despite multiple trials of different combination and adjunctive therapies added to an antipsychotic regime, AVH can remain drug resistant. It is also important to note that all antipsychotics are potentially anti-hallucinatory, even though these effects are usually modest. Moreover, even when medications are effective, concordance can be an issue; antipsychotics should therefore be used prudently, weighing effectiveness against side effects (Table 3). There are no clear guidelines for the drug treatment of AVH, and comparisons of the efficacy of antipsychotics for AVHs are few. Clinical drug trials very rarely focus on single symptom scores, such as hallucinations, and tend to report group mean changes in overall psychopathology, or at best the positive subscale scores.
Evidence shows that AVHs persist in spite of treatment in 32% of chronic patients21 and 56% of acutely ill patients.22 Trifluoperazine was popular as an anti-hallucinatory drug before the advent of atypical antipsychotic drugs. Clozapine is currently favoured for intractable AVHs and is beneficial in 30–60% of unresponsive patients.
Antipsychotic co-treatment is an option for clozapine augmentation. Olanzapine and risperidone may be alternative drugs in first episode psychosis. However, it is being debated whether clozapine should be used in such cases.
Table 3. Drug Treatment
Choice of antipsychotics
Cautions and contraindications
Titration of dose
Switching antipsychotics
Assessment of side effects (EPS, TD, haematological effects, etc.)
Measuring the beneficial effects
Assess worsening of symptoms
Compliance
Clozapine
Use of clozapine is suggested only after two other antipsychotics have been tried. It works better with continued usage and clinicians have to be patient in its upward titration. At six months, improvement in Global Assessment of Functioning score is significantly higher for clozapine in comparison to other antipsychotic drugs.23 However, when prescribing clozapine, cautions and contraindications must be noted (Table 4).
While higher doses of clozapine may not have a greater anti-hallucinatory effect, they still carry the risk of inducing the potential side-effects of this highly efficacious drug (Table 5). The most dreaded haematological side-effects usually manifest within six months. For that reason, during clozapine therapy the patient has to be closely monitored, bearing in mind the drug’s limitations in achieving anti-hallucinatory effects. If higher doses do not have the desired effect, the clozapine dose should be titrated downwards to the point of its maximum anti-hallucinatory effect in a particular patient. Such an approach can prevent the emergence of serious side-effects that would result in complete failure of the therapy. The dose can also be adjusted to a safer level in cases where psychological interventions are found to be successful. Clozapine can be effective even at lower doses, such as 200 mg/day. Higher doses should be prescribed to patients whose other positive symptoms are well under control only in the presence of command hallucinations.
Prophylaxis with an antiepileptic drug, such as lamotrigine or sodium valproate, should be commenced before titrating the dose above 600 mg daily. Close monitoring and active treatment of metabolic dysregulation should be initiated concurrently with clozapine therapy. In clozapine therapy, weighing up safety against superior antipsychotic efficacy and educating patients on the “clozapine lifestyle” is immensely important, as it helps in treating refractory cases of AVH. Thirty percent of patients treated with clozapine may remain unresponsive, and clinicians have to temper their expectations accordingly without becoming cynical. Isolated cases of clozapine-induced combined visual and auditory hallucinations have been reported.24 Although clozapine treatment has the highest anti-hallucinatory effect, a good percentage of AVH sufferers remain symptomatic and are classed as super-refractory.8 According to Gonzales (2006),25 50% of patients receiving antipsychotics achieve full remission, while 25% hear voices occasionally and 25% are unresponsive.
Table 4. Cautions & Contraindications of Clozapine
1. Patients with myeloproliferative disorders, a history of toxic or idiosyncratic agranulocytosis or severe granulocytopenia (with the exception of granulocytopenia/agranulocytosis from previous chemotherapy)
2. Bone marrow disorders
3. Patients with active liver disease, progressive liver disease and hepatic failure
4. Severe CNS depression or comatose state, severe renal and cardiac disease, uncontrolled epilepsy, circulatory collapse
5. Alcoholic/toxic psychosis and previous hypersensitivity to clozapine
6. Pregnancy and breast feeding
Table 5. Benefits and risks of Clozapine
Benefits:
1. Lower risk of suicide
2. Superior anti-delusional and anti-hallucinatory effects in refractory cases
3. Lower risk of tardive dyskinesia and suppression of TD
4. Improvement in cognition
5. Higher quality of life and better adherence
6. Decreased relapse
7. Sexual functions unaffected
Risks:
1. Agranulocytosis
2. Metabolic syndrome
3. Myocarditis
4. Chronic constipation and bowel complications
5. Increased risk of seizure
6. Hypersalivation
7. Abrupt withdrawal causes marked discontinuation symptoms
Benzodiazepines are often abused by voice hearers aiming to reduce their anxiety. Such patients might benefit from the addition of an antidepressant, as this could enhance their mental resources to cope with the voices, even though antidepressants have no anti-hallucinatory effects per se. Mood stabilisers are sometimes used to augment the efficacy of antipsychotics without any clinical validation. Despite multiple trials of different adjuvant therapies added to an antipsychotic regimen, there have been few promising results. Still, in practice, clinicians may become frustrated as they struggle to provide symptomatic relief to voice hearers at any cost. Recently, allopurinol, an anti-gout agent, has been used as an adjunctive therapy and, based on three randomised controlled trials, the results have been encouraging.26
Psychological Interventions
Persistent AVHs alone may not warrant pointless alteration of medication, as non-pharmacological interventions may achieve some control. When clinicians are not cognizant of non-pharmacological therapies, AVH patients who do not respond to antipsychotics alone may be mislabelled as having refractory AVH. In fact, they are only unresponsive to drug treatment and could potentially respond to an integrated approach. Similarly, patients with treatment-refractory AVH are often over-diagnosed as suffering from hard-to-treat schizophrenia, even when other positive symptoms have been ameliorated.
There exists a false dichotomy between physical and psychological treatment approaches to AVH. In practice, both treatment modalities have to be modified in a personalised form. After all, psychiatry was originally known as psychological medicine. Presently, even though different forms of non-pharmacological interventions are available for drug-resistant AVH, some have questionable effects. 27,28,29 (Table 6).
Table 6. Psychological Interventions
1. Cognitive behavioural therapy (CBT)
2. Acceptance and commitment therapy (ACT)
3. Competitive memory training (COMET)
4. Compassionate mind training (CMT)
5. Hallucination-focused integrative therapy (HIT)
6. Mindfulness-based therapy
7. Normalising techniques
8. Enhanced supportive psychotherapy
9. Attention training technique
10. Distraction techniques
11. Self-help approaches
CBT therapists posit that AVHs are a manifestation of the morbid objectification of inner dialogue (thinking in words), and accordingly that verbalised thoughts are the raw material for AVHs. Verbal thinking differs from external speech in many respects and has several distinct features. CBT therapists believe that cognitive dysfunctions underlie AVH and target them with cognitive remediation strategies. Those experiencing voices commonly think that the voices are caused by a powerful external agency, and that they are controlling and potentially harmful. Psychological factors, such as meta-cognitive biases, beliefs, and attributions concerning the origins and intent of voices, also play a critical modulatory role in shaping the experience of AVH. Teaching patients to recognize the source of the voices has, by itself, yielded beneficial effects.
Specific techniques have been designed to modify the frequency of AVHs and restore a sense of control over them. Earlier methods involved behavioural approaches based upon addressing hypothesized antecedents and reinforcers of voices and explored a variety of specific interventions such as relaxation training, graded exposure to voice triggers, manipulation of environmental possibilities and even aversion therapy. 30 These behavioural techniques were eventually expanded on by the application of cognitive methods. The primary aim of psychological therapy is to change the belief that voices are omnipotent and uncontrollable and to suppress the associated attributes of false identity, wrong intentions, and urges to harm oneself and others. They encourage patients to challenge irrational interpretations and modify maladaptive behaviour, diverting attention from voices with distraction techniques (Table 7).
Reality testing and behavioural experiments are one form of CBT intervention, based on the view that behavioural changes can prompt cognitive changes. Attention switching can also be used to challenge the belief that hallucinations are uncontrollable. Command AVHs are more prevalent among the forensic population and are more distressing than commenting ones. The risk of the sufferer acting on them is high when the voices are perceived as omnipotent and uncontrollable. CBT has been proven beneficial in tackling command hallucinations. Lack of insight and formal thought disorder may not necessarily disqualify a patient from CBT for AVH; nonetheless, negative symptoms may pose a barrier to this form of psychological intervention. The current model of CBT for psychosis has been criticised as simply an extension of general CBT concepts that does not take into account the specificities of psychosis.31 Patients with higher intelligence, who have the ability to grasp abstract concepts, might gain greater benefit from CBT.32
Table 7. Goals of CBT
1. Change false beliefs about AVH
2. Challenge irrational interpretations
3. Modify maladaptive behaviour, e.g. fear of the voices or hiding from them
4. Divert attention, using distraction techniques
5. Build and maintain treatment strategies
6. Develop cognitive behavioural strategies
7. Develop newer understanding of hallucinatory experience
8. Address negative self-evaluation
Combining psycho-education and supportive psychotherapy has been found to enhance the functioning and self-esteem of voice hearers, providing a therapeutic structure. In the increasingly popular self-help voice-hearing groups, the ethos of recovery is understanding, accepting and integrating the sufferer’s turmoil. Acceptance and non-judgemental support of people with similar experiences has helped many patients cope with the condition. In response, the number of books on AVH, aiming to educate the sufferers and carers, has grown considerably.
Newer Psychological Interventions
Attention Training Technique (ATT) focuses on negating psychological distress through cognitive and meta-cognitive modification. 33,34 Patients focus on up to five neutral sounds, such as a dripping tap, before switching their attention between different sounds. They then practise listening to all the sounds simultaneously. After a few weeks, they focus on neutral sounds and then on the AVH. Once this process is mastered, they switch attention between voices and other sounds, before being asked to divide their attention among them. This exercise continues for several weeks, whereby the aim is to replace the self-regulatory process with new processing configurations.
Acceptance and commitment therapy (ACT) is aimed at achieving psychological flexibility. It incorporates mindfulness and acceptance, considering AVH as a private experience and asserting that patients experience distress only when they try to deafen the voices. By reducing both the struggle with voices and engagement with them, key responses such as arousal, attention and activation of brain areas are hypothesised to be reduced.35 The ideas behind ACT are consistent with the recovery movement’s emphasis on finding a way to live a valued life despite ongoing problems. To this effect, unique and effective coping strategies are offered, whereby patients are given the insight that parts of the self are behind the voices; accepting them thus means sending a loving message of compassion, acceptance and respect to oneself. Two randomised controlled studies have yielded promising results.36,37 ACT follows a set manualised structure, rather than relying upon the complex and lengthy process of belief modification: therapy can therefore be much briefer, potentially practised by a wider range of clinicians, and more cost-effective.38
There are verbal and non-verbal routes to emotions. As CBT uses the former in voice therapy, it is less effective when patients are negatively involved with the voices. Competitive Memory Training (COMET), on the other hand, uses the non-verbal route. COMET sessions generally involve four stages:39 (a) identification of aspects of negative self-esteem reinforced by the voice; (b) retrieval and re-living of memories associated with positive self-esteem; (c) bringing positive self-esteem into competition with the content of the voices, to weaken associations between voice content and negative self-valuation; and (d) learning to disengage from the voices and to accept them as psychic phenomena.
The significant past comes back to the conscious mind in AVH, as life experiences charged with emotion make a compelling impression on the brain. The observation that voices are knowledgeable about patients suggests that auditory hallucinations are linked to memory. In other words, negative experiences from memories evoke emotions, which should be deactivated. Distancing and decentring techniques could help patients to interpret voices as false mental events. As a result, incompatible memories could become tools to modify the power of voices—they are deactivated by new learning. Thus, when voices torment hearers, telling them that they are failures, a competing memory of such success as passing a significant examination is introduced. Posture, facial expression, imagination, self-verbalisation and music are all procedures included in the COMET protocol.40
Compassionate mind training (CMT) is used to encourage better resilience to unpleasant commenting voices. CMT involves practising exercises which promote self-compassion and compassion towards others. It aims to activate brain systems involved in social soothing and self-soothing, which moderate the threat systems that are active when experiencing unfriendly voices.41
Mindfulness is paying attention in a particular way: on purpose, in the present moment and non-judgementally. The clinical literature cautions against the use of meditation in psychosis, but the effectiveness of mindfulness-based approaches for people with psychosis has been demonstrated in controlled clinical settings42 and in the community.43 Abba et al.44 argue that the effectiveness of mindfulness is a three-stage process: (a) becoming knowledgeable about and developing more awareness of psychotic experiences, and observing the thoughts and emotions that follow them; (b) permitting psychosis to come and go without reacting, in order to cultivate the understanding that distress is produced by the meanings one attaches to thoughts and sensations; and (c) becoming autonomous by accepting psychosis and the self, acknowledging that the sensations only form part of the experience and are not a definition of the self.
Neuroimaging studies are beginning to explain the neural mechanisms of how mindfulness might be working clinically. Structural changes have been observed in the anterior cingulate cortex, an area of the brain associated with emotional regulation.45 There is evidence to suggest that mindfulness practice is correlated with reduced brain activity in the default mode network.46
Limited improvements with mono-therapy have prompted the development of multi-modular approaches. Hallucination-focused Integrative Therapy (HIT) is geared towards integrating CBT with neuroleptics, coping strategies, psycho-education, motivational intervention, rehabilitation and family treatment.47 Extant studies indicate that integrated treatment is more effective than TAU (treatment as usual).
The continuum model of psychosis and ordinary mental events has incited the development of normalisation of the voice hearing experience.48 Most psychiatric symptoms occur in normal people—just as breathlessness and palpitations occur while exercising—but are potential clinical symptoms. It is the frequency and duration that determine the borderline. According to the cognitive model, an internal mental event receives external attribution. Through normalisation, the external attribution can be reversed.
Although drug treatment may be the most practical way of managing AVH, refractory cases pose formidable challenges and it must be emphasized that psychological treatments are not counterproductive if applied skilfully. Clinicians who espouse the view that psychosis is a medical condition dismiss the usefulness of psychological interventions. The counter argument would be that a patient with a medical condition, such as stroke, benefits from physiotherapy, occupational therapy and psychological approaches.49
Repetitive Trans-cranial Magnetic Stimulation
There are several ongoing trials in which the aim is to treat refractory AVH. Repetitive transcranial magnetic stimulation (rTMS) is thought to alter neural activity over language cortical areas. Several studies on rTMS have shown improvement in the frequency and severity of AVHs, albeit without offering any strong conclusive evidence for its efficacy.50 Moderate rates of AVH attenuation following rTMS have been noted in three meta-analyses. Given that remarkable improvements in isolated cases have also been claimed, rTMS may be an appropriate mode of therapy for some patients.
Available data suggest that rTMS selectively alters neurobiological factors that determine the frequency of AVH. However, a recent meta-analysis indicated that the effect of rTMS may be short-lived (less than one month).51 Studies seem to indicate that rTMS may be effective in reducing the frequency of AVHs, with little effect on their other topographical aspects.50,52 Sham stimulation has also led to improvements in a number of AVH parameters. Compared with bilateral stimulation, rTMS of the left temporo-parietal region appears more effective in reducing AVH frequency.53 To reduce the distress associated with AVH and help patients cope with their hallucinatory predisposition, rTMS could be combined with CBT. The side-effect profile of this biological approach is much cleaner than that of medications. Still, like other anti-hallucinatory treatments, this neuro-stimulation technique does not guarantee long-term elimination of AVH. Electroconvulsive therapy (ECT) is considered a last resort for treatment-resistant psychosis. Although several studies have shown clinical improvement, a specific reduction in hallucination severity has never been demonstrated.
Avatar Therapy
Computer-assisted voice therapy is a budding form of computerised psychological treatment that is currently undergoing trials.28 In this novel therapy, persecutory voices are directly depowered through voice dialogue with the aid of a computerised avatar of the alleged persecutor. Analytically-oriented therapists can even converse with “voices”, and such committed clinicians will find computerised voice therapy another helpful tool. It is hard to ascertain whether the benefits of avatar therapy were due to the specific technique involved or simply to the increased attention and care, and Leff’s team acknowledged the limitations of their work.54
Sound Therapy
Another important development in voice therapy is the use of tinnitus control instrument (TCI)—a form of sound therapy—in treating refractory AVH. Similar to AVH, subjective tinnitus is defined as the false perception of sound in the absence of acoustic stimuli. Even though their definitions are similar, the origin and underlying causes of these two conditions differ. Tinnitus is characterised by a simple sound—a monotone—and is non-verbal. In tinnitus, the brain is believed to interpret an internally generated electromagnetic signal as an acoustic sound or sounds.
Kaneko, Oda, and Goto29 reported successful intervention in two cases of refractory AVH with sound therapy, using a tinnitus control instrument (TCI) alongside antipsychotic medications. They posited that, in sound therapy, the auditory system directly assists the limbic nervous system and that the neuro-mechanism for AVH is sensitive to sound therapy.55 They concluded that low-level auditory stimulation might hinder the progression of voices, giving the brain breathing space to initiate a self-healing process.
Future Directions
Hallucinogenic drugs, anti-hallucination medications and neuroimaging studies may lead to a better understanding of AVH. Animal models of hallucinations and pharmacogenetics might help to find more efficacious anti-hallucinatory drugs. AVHs themselves may have a genetic origin.10 Thus, not all patients that develop schizophrenia would experience AVHs. Such a finding might help shed more light on the genetics-linked mechanism and remedial measures of hallucinations in schizophrenia. Because the biological substrates facilitating drug effects on hallucinations remain largely unidentified, future studies with translational designs should address this important issue to find a more targeted drug treatment of psychosis.56
If a derivative of clozapine without the haematological side-effects is formulated in the future, it might be an important milestone in the treatment of refractory AVHs and schizophrenia, because clozapine has the most potent anti-hallucinatory effect. Such a novel drug could become the first line of treatment for schizophrenia, as it would address many schizophrenic symptoms at their onset. This is crucial, as symptoms and habits become stronger and more resistant the more frequently they occur. Fatty acid amide derivatives of clozapine, such as DHA-clozapine, have been found to have better pharmacological properties and enhanced safety; however, such claims await substantiation in clinical trials.57 More attention therefore needs to be directed to this potentially rewarding research arena. It is, however, very likely that, even after a better pharmacological cure is found for AVHs, a few symptoms might linger for long periods. With this in view, efficient non-pharmacological remedies have to be designed to deal with such residual symptoms.
Discussion
Medications help reduce psychic pain and protect the dignity of patients, as well as preventing suicides and homicides. From the patient’s perspective, the calming and relaxing effects of pharmacological therapies are a priority for relief from the distress due to AVH. Among the antipsychotics, clozapine has the maximum anti-hallucinatory effect, and it is a shame that it cannot be used as a first-line treatment choice. Treatment of AVH should be individualised, and clinicians should be prepared to apply several clinical and non-clinical approaches in concert to address them.
More research into the aetiology and mechanism of AVH is warranted in order to find effective treatment strategies. There is no shortage of theories about the mechanism of AVH, but there is no consensus among the investigators. It is unlikely that AVH is a pure neuro-chemical experience or a biological glitch, and this is where the currently available drug treatments fail. The distinction between primary and secondary symptoms was lost with the triumph of biological psychiatry in the last century. Thus, some authors presently claim that AVHs may even be a secondary symptom or a neuroquantological manifestation of an underlying biological disorder. We should not minimise the importance of eliminating symptoms when such symptoms are incapacitating, as in the case of hallucinatory experiences.
The present recovery model, which emphasises and supports the patient’s potential for recovery involving hope, supportive relationships, empowerment, social inclusion, coping skills and meaning, cannot be realised without the help of psychological interventions. In this respect, CBT is a useful tool in the rehabilitation of psychotic patients with residual AVH. Jauhar et al.58 argued that estimates of the effectiveness of CBT in schizophrenia are influenced by failure to consider sources of bias, so that the benefits are more apparent than real, prompting the question of whether CBT has been oversold.49 The verdict of the Maudsley debate on the issue was that CBT has not been oversold, but rather has a great impact on symptom reduction and on enhancing concordance and insight. Perhaps the most informative trial so far has been the work on cognitive therapy for command hallucinations, which demonstrated the benefit of specific model development and productively combined measurement of process with a targeted outcome.59
There is ample evidence suggesting that a combination of family and psychological interventions, alongside pharmacotherapy, can be the most effective way of dealing with refractory AVH.60 The inner-dialogue hypothesis of AVH held by CBT therapists has its opponents.61 Patients respond to the voices they experience by utilising inner speech, and several observations weaken the inner-dialogue hypothesis. David and Lucas demonstrated in a single case study that short-term maintenance of phonological representation (inner dialogue) may co-exist with AVHs – they are not synonymous experiences. The cost-effectiveness of psychological interventions is poorly studied, despite being highly relevant for policy decisions in healthcare.
Computerised voice therapy works better with technically minded, highly intelligent patients. In contrast, individuals of low to average intelligence may require a primarily behavioural approach, with limited efforts to understand concepts such as automatic thinking and schema. Unlike sound therapy delivered through music-playing devices (iPad, iPod, iPhone, etc.), TCI causes no disruption to daily functioning and can be used continuously. Computerised voice therapy and sound therapy warrant standardised case trials. When it comes to treating AVHs, optimising compliance, reducing the burden of symptoms, and improving control, quality of life and social functioning should be the therapeutic goals.62 Neuroquantological views of AVHs63 explain the limitations of pharmacotherapy and the usefulness of psychological interventions.
Tuberculosis is a major infectious disease worldwide, especially in developing countries, with an estimated global case fatality rate of 13% in 2007. The World Health Organization estimated that there were 13.7 million prevalent cases of TB infection worldwide, with each year bringing about 9.27 million new cases, 44% of which are new smear-positive cases1. Musculoskeletal tuberculosis is rare, chest wall tuberculosis is rarer, and involvement of the costochondral junction is among the rarest forms of tuberculosis. Tubercular costochondritis usually presents with insidious-onset, non-specific pain and swelling, resulting in delay in diagnosis. The diagnosis is usually made by typical radiological findings together with microbiological and histological evidence of tuberculosis. Treatment consists of antitubercular therapy with or without surgery.
Case report
A 30-year-old male smoker of low socioeconomic status presented with a two-month history of gradually progressive low-grade fever, malaise and anorexia. For about one month he had also had pain on the right side of the chest, just adjacent to the lower part of the sternum. The pain had started insidiously, gradually worsened with time, and was dull and aching in character. It was localised to the right lower parasternal area, occasionally radiating to the back, was aggravated by physical activity and deep inspiration, and was relieved to some extent by ordinary anti-inflammatory medications. There was a significant history of pulmonary tuberculosis in the patient’s sister one year previously. There was no history of cough, haemoptysis, or fever with chills, and no past history of tuberculosis.
On general physical examination, the patient was weak and febrile. On local examination, there was a bulge in the right lower parasternal area, corresponding to the right 9th costochondral junction. On palpation, there was severe tenderness in the same area. There was no peripheral lymphadenopathy and abdomen was soft and non-tender with no organomegaly. Chest examination was unremarkable. The rest of the examination was normal.
In terms of investigations, the chest radiograph, ECG, haemogram, kidney function tests and liver function tests were normal. The ESR was raised (34) and the Mantoux test was positive (15 mm). Sputum for AFB was negative. Axial CT of the chest demonstrated expansion of the right 9th costal cartilage with soft tissue thickening on both the inner and outer aspects of the cartilage (Fig. 1). FNAC (fine needle aspiration cytology) of the costochondral junction was performed; culture of the aspirated material grew Mycobacterium tuberculosis and histopathology revealed tubercular granuloma (Fig. 2). A final diagnosis of tubercular costochondritis was made and the patient was treated with antitubercular drugs for 9 months. The patient’s symptoms improved after 2 weeks of treatment, and the swelling and tenderness subsided. Post-treatment axial CT demonstrated complete resolution of the soft tissue abnormality previously seen around the costal cartilage (Fig. 3).
Figure 1: CT chest showing evidence of costochondritis.
Figure 2: Typical tubercular granuloma with central caseous necrosis on histopathology.
Figure 3: Post ATT CT showing complete resolution of costochondritis.
Discussion
Tuberculosis is a very common infectious disease in developing countries like India, resulting in significant morbidity and mortality. Musculoskeletal tuberculosis is relatively uncommon, accounting for 1 to 2% of all tuberculosis cases2,3 and for about 10% of all extrapulmonary TB infections4. Tuberculosis of the chest wall constitutes 1 to 5% of all cases of musculoskeletal TB5,6. TB abscesses of the chest wall are usually seen at the margins of the sternum and along the rib shafts, and can also involve the costochondral junctions, costovertebral joints and the vertebrae7. In one study8, on the basis of the part of the rib involved, the roentgenographic findings in patients with rib tuberculosis were divided into four categories: costovertebral (35%), costochondral (13%), shaft (61%) and multiple cystic bone lesions. TB of the thoracic wall is usually caused by reactivation of a latent focus of primary tuberculosis formed during haematogenous or lymphatic dissemination, but direct extension from contiguous mediastinal lymph nodes can also occur9. In developing countries such as ours, tuberculosis is endemic and all the rare forms of the disease have been described; in developed countries, resurgence of tuberculosis due to HIV may be responsible for atypical presentations.
Thoracic wall tuberculosis mostly presents insidiously with pain and swelling, but the diagnosis of chest wall TB is often delayed due to the lack of specific symptoms and signs and the gradual course10. Fewer than 50% of chest wall TB patients may have active pulmonary TB11,12. Imaging techniques such as X-rays and CT scans play an important role in the diagnosis and follow-up of these patients. According to a study by Vijay YB et al13, radiological signs may not be present at the time of symptomatic presentation; abscesses or sinuses may be present well before the imaging modalities detect them. Computed tomography (CT) is more valuable for localisation and for the detection of bone destruction and soft tissue abnormalities. Atasoy et al demonstrated the role of magnetic resonance imaging (MRI) for early detection of marrow and soft tissue involvement in sternal tuberculosis, owing to the high contrast resolution of MRI14. Diagnosis is usually confirmed by finding acid-fast bacilli (AFB) and positive AFB cultures of bone (positive in up to 75% of cases), and caseous necrosis and granuloma on histopathology11.
Complications of thoracic wall tuberculosis include secondary infection, fistula formation, spontaneous fractures of the sternum, compression or erosion of the large blood vessels, compression of the trachea and migration of tuberculosis abscess into the mediastinum, pleural cavity or subcutaneous tissues as discharging sinus. Chest wall TB needs to be differentiated from benign and malignant tumors [chondroma, osteochondroma, fibrous dysplasia, lipoid granuloma, chondrosarcoma, myeloma multiplex]11, metastatic carcinoma, lymphoma or other kinds of infection15,16.
The treatment of choice for chest wall TB is still not clear. Whether antitubercular therapy alone, or surgical debridement (or excision, depending on the extent of the lesion) combined with antitubercular therapy, should be used needs further study. The general rule, however, is that if there are any complications, surgery is the treatment of choice, followed by antitubercular therapy. We conclude that the diagnosis of thoracic wall tuberculosis is a challenge for physicians: it is suspected from the gradual onset of clinical features and confirmed by microbiological, histopathological and radiological findings. Early diagnosis and treatment are important to prevent complications caused by bone and joint destruction.
Supporting Professional Activities (SPA) time is a part of each consultant’s new contract. When the new consultant contract evolved in 2003, a suggested breakdown of the week was 7.5 sessions (1 session equates to 4 hours) for direct clinical care (DCC) and 2.5 sessions for SPAs.1 This was driven by the need for consultants to continue their own professional development (CPD) as well as having the time for input into the development of trainees and medical students.
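To make the session arithmetic explicit, the minimal sketch below (our own illustration, not taken from the contract documentation; the function name and the alternative split in the second example are hypothetical) converts a job plan expressed in sessions into weekly hours, using the figures quoted above (1 session = 4 hours).

```python
# Illustrative sketch: converting a consultant job plan expressed in sessions
# into weekly hours, assuming the figures quoted in the text
# (1 session = 4 hours; suggested split of 7.5 DCC + 2.5 SPA sessions).

HOURS_PER_SESSION = 4

def job_plan_hours(dcc_sessions: float, spa_sessions: float) -> dict:
    """Return the weekly hours implied by a DCC/SPA session split."""
    return {
        "DCC hours/week": dcc_sessions * HOURS_PER_SESSION,
        "SPA hours/week": spa_sessions * HOURS_PER_SESSION,
        "Total hours/week": (dcc_sessions + spa_sessions) * HOURS_PER_SESSION,
    }

# The 2003 suggested split: 7.5 DCC + 2.5 SPA = 30 + 10 = 40 hours per week.
print(job_plan_hours(7.5, 2.5))
# A hypothetical contract offering only 1.5 SPA sessions leaves 6 hours/week
# for non-DCC work (example figures, not from the text).
print(job_plan_hours(8.5, 1.5))
```

Under the suggested 2003 split, this works out at 30 hours of direct clinical care and 10 hours of supporting professional activities in a 40-hour week.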
Examples of CPD work for consultants could include audit for improvement of service or patient care, teaching of patients, medical students or trainees, research, publications, aspects of hospital management and involvement in simulation courses e.g. Advanced Life Support (ALS)/Advanced Paediatric Life Support (APLS).
The General Medical Council (GMC) requires that, during annual appraisals, doctors use supporting information to demonstrate that they continue to follow “Good Medical Practice”. This mandates that doctors should ‘take part in educational activities that maintain and further develop’ their competence and performance.1 With regard to revalidation, the GMC states that doctors will have to demonstrate that they regularly participate in these activities; at the Annual Review of Competence Progression (ARCP) and at the annual job plan review it is therefore imperative that accurate records of these CPD activities are presented.2
It is clear, therefore, that the provision of allocated time during the working week to complete these aspects of working life is deemed necessary for consultants. The Royal College of Anaesthetists and the Association of Anaesthetists of Great Britain and Ireland both support the original view that a consultant should “typically” have 2.5 SPAs in their contract (subject to individual workload). With the demands of service provision, consultant SPAs are clearly under threat: around 40% of new consultants are offered jobs with fewer SPA sessions than are thought necessary to allow sufficient non-DCC work.3
Since trainees are subject to similar appraisal and development requirements, we wonder whether trainees should also be allocated SPA time. For progression through training years and to pass the ARCP, it is necessary to provide evidence of development within clinical practice in a similar way to consultants. This can involve a great deal of extra time. Once (notoriously difficult) exams have been passed, each trainee must go through the application process and prove what skills they have assimilated during their training to date. Indeed, the ST3 anaesthesia application criteria state that the following are among the ‘desirable’ criteria that require evidence:
Relevant academic and research achievements
Involvement in an audit project, or quality improvement project
Interest and commitment to the specialty beyond the mandatory curriculum
Evidence of interest in, and experience of, teaching
Instructor status in an advanced life support course (ALS, APLS)
Involvement in management…and understanding of management
Effective multi-disciplinary team working and leadership
Effective leadership in and outside medicine
Achievement outside medicine
Altruistic behaviour, e.g. voluntary work
The list is extensive and clearly requires a lot of time and input outside of the normal working week. With the expectation that trainees should be prepared to move straight from CT2 to ST3 (assuming their exams are completed), these desirable criteria must be addressed alongside other mandatory aspects of training such as, for anaesthesia: an Initial Assessment of Competency (IAC), an Intensive Care Unit (ICU) module, an Initial Assessment of Obstetric Competency (IAOC) and 15 Units of Training. With all these challenges between a core anaesthetic trainee and potential specialist anaesthetic training, there seems little time to complete an adequate number of the desirable criteria; this is a compelling reason to build some time into the trainee contract to help produce more well-rounded trainees.
However, therein lies the challenge: anaesthetic training is a busy programme. Trainees work in so many areas of the hospital, such as ICU, theatres and the obstetric delivery suite, that they must learn and practise a wide range of skills to demonstrate the proficiency expected of a consultant anaesthetist. With clinical experience already at a premium because of European Working Time Directive hours, creating a good teaching environment whilst providing service provision is hard enough. It might seem difficult, therefore, to justify taking away yet more clinical time from trainees.
The proposed “7 day National Health Service (NHS)” contract could also pose more difficulties. Current example rotas released by NHS Employers demonstrate an increased likelihood of shift-work for a typical ICU rota.4 This shows trainees will be working more weekends and nights than at present, which could reduce the time spent directly with consultants. This would make introducing more non-DCC work difficult to justify as it would likely occur during daylight hours – when training could occur.
How it could be introduced:
Assuming SPA sessions for trainees were implemented, there would also be practical aspects to address. For example, how many SPA sessions to allocate for each level of trainee and how to monitor that this time was spent effectively and efficiently.
Monitoring:
Trainees could propose which aspects to focus on during their SPA sessions, such as management, teaching, quality improvement projects or more time in their sub-specialty interest. Goals could then be set at the initial educational supervisor meeting, much like a Personal Development Plan (PDP), and monitored throughout the year. This would give focus to any SPA time and ensure it is used effectively. If a trainee abused the time or was not using it appropriately, the SPA time could be withdrawn. This would give trainees more opportunity to develop the skills so often considered additional to clinical training.
Funding:
In times of NHS austerity, funding would also need addressing. Neither the Deanery nor the Trust might be willing to pay a doctor for days spent working outside the hospital workload, for example in educational roles or as a college tutor.5 A trial in a single deanery could assess the efficacy of such a scheme.
A possible solution would be to remove a few days of study leave allowance, as many trainees do not use their whole entitlement, and re-assign these to SPA time, allowing a trainee more flexibility. Trainees could initially start with fewer SPA sessions when more junior, to allow more clinical time, increasing SPAs to one per week for intermediate or higher trainees who may well be approaching their Certificate of Completion of Training (CCT).
Conclusion:
There are some practical difficulties in establishing trainee SPA sessions, and no doubt many feel all contracted time should be spent practising anaesthesia. However, given the changing way trainees are recruited via a ‘tick-box’ national application system, together with the variety of non-clinical skills expected by the time of consultancy, such as the ability to teach, conduct audit work and engage in managerial roles, a small change in the training set-up could produce more rounded trainees, benefitting anaesthesia in general and training programmes in the future.
Communicating effectively with patients and families is a vital component of high quality care. Studies have shown that patients feel excellent communication in consultations is vital for instilling confidence in the medical practitioner.1,2 There is evidence that doctors are not meeting this fundamental need.3
Some consider it an ethical obligation of doctors to balance their training needs against providing optimal care for patients. It is well known that junior trainees have a significant level of performance anxiety that translates through to their consultations.4
Simulation-based training is now an integral part of the postgraduate curriculum in a variety of medical specialties for training in the management of acute scenarios. As an educational method it provides a controlled environment in which it is safe to learn from errors,5 and it improves learner outcomes.6 Simulation has been shown to be a valid approximation of true clinical practice.7 It therefore allows trainees to reach higher levels of proficiency before embarking on real patient encounters.
Current Core Medical Trainees (i.e. junior doctors who have embarked on the first stage of physician training within England) in the London deanery are expected to be able to manage complex communication scenarios effectively prior to entering specialty training. This is demonstrated by requirements set out in the Core Medical Curriculum, as detailed in Box 1. Whilst significant emphasis is placed on communication skills training in basic scenarios at a medical student level, there is very little formal postgraduate communication skills training within this deanery and others.
Box 1: Excerpts from Core Medical Trainee curriculum
Counsel patients, family, carers and advocates tactfully and effectively when making decisions about resuscitation status, and withholding or withdrawing treatment
Able to explain complex treatments meaningfully in layman's terms and thereby to obtain appropriate consent even when there are problems of communication and capacity
Skillfully delivers bad news in any circumstance including adverse events
This deficit in training led us to conduct a survey exploring Core Trainees’ views regarding communication skills training in the London deanery. Findings from the survey are detailed in Box 2.
Box 2: Results from Core Trainee Survey
83% received less than 2 hours of post-graduate training in communication skills since the start of Core Medical Training
Only 50% felt somewhat competent in engaging in difficult communication scenarios
88% reported significant challenges when conducting these discussions and had experienced difficult on-call situations relating to communication difficulties
100% displayed interest in attending further Simulation Training in advanced communication skills
Method
We devised a pilot project using simulation to develop trainees’ competencies in advanced communication skills. After application to our local training board, we secured funding to run a number of sessions for core medical trainees within the London area.
The objectives of our pilot project were to provide experience of realistic communication based scenarios in a structured and safe environment to core trainees; provide feedback on trainees’ communication styles and offer suggestions for improvement; improve confidence of trainees in difficult communication situations.
Each session was conducted in an afternoon, and candidates were divided into three groups of three trainees who remained together for the entire session. We ran four sessions, with a total of 36 trainees. Each group was facilitated by a consultant or a higher trainee in either elderly care or palliative care medicine, given our focus on resuscitation/end-of-life discussions and assessment of capacity. We employed three actors who rotated around the groups performing a variety of roles, including patients and relatives. With a total of six scenarios, each trainee had the opportunity to participate in at least two scenarios lasting approximately 15 minutes each, with feedback thereafter for approximately 10 minutes.
The scenarios employed were based on personal experience of regularly occurring, challenging communication situations encountered in our own clinical practice. We created detailed scripts for the actors as well as corresponding clinical vignettes for the candidates.
The scenarios were:
End-of-life discussion with a challenging family regarding a patient with end-stage dementia.
Discussing resuscitation with a family opposed to do not attempt resuscitation (DNAR) regarding an acutely unwell patient with poor functional baseline.
Discussing resuscitation with a young patient with metastatic cancer undergoing palliative chemotherapy who has little understanding of the terminal nature of the disease.
Assessing mental capacity regarding discharge planning in a patient with mild to moderate dementia.
Assessing mental capacity regarding treatment in a patient with moderate learning difficulties.
Assessing mental capacity in a medically unwell patient with mental health issues who wishes to self discharge from the ward.
Box 3 outlines the session structure.
Box 3: Timetable for the session
12.30-12.45:
Actors briefing
12.45- 13.00:
Facilitators briefing
13.00-13.30:
Core trainee briefing
13.30-14.45:
Scenarios 1-3 in small groups
14.45-15.00:
Tea/Coffee break
15.00-16.15:
Scenarios 4-6 in small groups
16.15-16.45:
Feedback and closing
Results
Written feedback was obtained from all participants by distributing a post-course evaluation form, with a 100% response rate. A number of areas were assessed via a Likert scale of 1 – 5, with 1 being ‘not at all’ and 5 being ‘very much’. 100% of trainees felt the content was useful and their knowledge/skills had increased. 100% felt more confident after the session and all trainees and facilitators felt this would be beneficial for medical trainees. A full breakdown of results is detailed in Table 1.
Table 1: Results from post-course feedback
The post-course feedback form allowed for free text feedback from participants, with some individual examples given below:
“Realistic scenarios - good opportunity to experience them and get feedback in a safe environment, good practice of common communication problems”
“It builds confidence in dealing with these situations and provides basis for building up ”
“This work dealt with complicated cases and actors were not too easy which I liked. Good and unforgettable”
Discussion
With the European Working Time Directive and the resulting shorter working hours, opportunities to gain proficiency in a number of key skill areas are limited by reduced patient encounters. A recurrent complaint among core medical trainees is the lack of observed clinical encounters leading to individualised feedback.
Feedback from more experienced specialty practitioners was only one component of our attendees’ learning experience. They also benefited from personal practice in a non-threatening environment, observation of their colleagues’ communication styles and, finally, learning through reflection with their colleagues.
This innovation has shown a clear benefit in amplifying the confidence and preparedness of our core medical trainees in approaching these higher level communication scenarios. Future directions include introducing quantitative assessments pre- and post- course to objectively demonstrate improved confidence and performance. Providing the course to trainees in other specialties as well as across the multidisciplinary team would also be beneficial given the universal requirement of healthcare professionals to communicate skilfully.
Ventilator-associated pneumonia (VAP) is a type of nosocomial pneumonia that occurs in patients who receive mechanical ventilation and is usually acquired in the hospital setting approximately 48–72 hours after mechanical ventilation.1 VAP is one of the most frequent hospital-acquired infections occurring in mechanically ventilated patients and is associated with increased mortality, morbidity, and health-related costs. Several risk factors have been reported to be associated with VAP, including the duration of mechanical ventilation, and the presence of chronic pulmonary disease, sepsis, acute respiratory distress syndrome (ARDS), neurological disease, trauma, prior use of antibiotics, and red cell transfusions.2 VAP occurrence is closely related to intubation and the presence of the endotracheal tube (ETT) itself.
The diagnosis of VAP is challenging because there are few adequate objective tools for assessing bacteria-induced lung injury in a heterogeneous group of hosts. Around 90% of ICU-acquired pneumonias occur during mechanical ventilation, and 50% of these ventilator-associated pneumonias begin in the first 4 days after intubation.3 VAP has a cumulative incidence of 10-25% and accounts for approximately 25% of all ICU infections and 50% of ICU antibiotic prescriptions, making it the primary focus for risk-reduction strategies.1,4 For all these reasons, early diagnosis and prevention of VAP have held a prominent position on the research agenda of intensive care medicine over the past 25 years, with the ultimate goal of improving patient outcome, preferably by reducing mortality.
A PubMed search using the keywords ‘ventilator-associated pneumonia’ revealed a total of 3612 titles and 625 review articles within a search limit of 10 years, between 2005 and 2014. Only articles in English were chosen.
PATHOGENESIS
Understanding the pathogenesis of VAP is the first step in formulating appropriate preventive and therapeutic strategies. The initial step in the pathogenesis of VAP is bacterial colonization of the oropharynx and gastric mucosa, followed by translocation of the pathogens to the lower respiratory tract. The most common means of acquiring pneumonia is aspiration, which is promoted by the supine position and by upper airway and nasogastric tube placement.2,5 In mechanically ventilated patients, aspiration occurs around the outside of the endotracheal tube rather than through the lumen. Aerobic Gram-negative bacteria presumably reach the lower airway via aspiration of gastric contents or of upper airway secretions; VAP can also be acquired via aspiration from the stomach, nose and paranasal sinuses. Figure 1 depicts the essential elements favoring colonization of the lower respiratory tract with bacterial pathogens and the subsequent development of pneumonia.2,5,6
Figure 1: Pathogenesis of ventilator-associated pneumonia5. *Gastric alkalinization, prior antimicrobials, ICU stay, intubation, supine position, circuit/airway manipulation and mishandling, device cross-contamination, sedation, diminished cough reflex, and malnutrition predispose to colonization and aspiration. As the duration of ICU stay increases, colonization with MDR Gram-negative pathogens such as Pseudomonas and Acinetobacter increases. †Via contaminated nebulizers/aerosols. Reproduced with permission from the publisher.
COMMON CAUSES
The specific microbial causes of VAP vary widely depending on epidemiological and clinical factors. Common pathogens include aerobic Gram-negative bacteria such as Pseudomonas aeruginosa and members of the family Enterobacteriaceae, as well as staphylococci, streptococci and Haemophilus species. Microorganisms such as Pseudomonas spp., Acinetobacter spp. and methicillin-resistant Staphylococcus aureus occur more commonly after prior antibiotic treatment, prolonged hospitalization or mechanical ventilation, or when other risk factors are present.6,7
Moreover, as several recent studies have shown, critically ill patients may have defects in phagocytosis and behave as functionally immunosuppressed even before the emergence of nosocomial infection.8,9
DIAGNOSIS
Clinical Diagnosis
There is no gold standard for the diagnosis of VAP, in spite of the variety of proposed definitions. VAP has traditionally been diagnosed by the clinical criteria of Johanson and colleagues (appearance of new or progressive pulmonary infiltrates, fever, leucocytosis and purulent tracheobronchial secretions), which are non-specific. When findings on histologic analysis and cultures of lung samples obtained immediately after death were used as the reference, a new and persistent (>48-h) infiltrate on chest radiograph plus two or more of the three criteria (i) fever of >38.3°C, (ii) leukocytosis of >12 × 10⁹/L, and/or (iii) purulent tracheobronchial secretions had a sensitivity of 69% and a specificity of 75% for establishing the diagnosis of VAP.10
Because of the poor specificity of the clinical diagnosis of VAP and of qualitative evaluation of endotracheal aspirates (ETAs), Pugin et al. developed a composite clinical score, the clinical pulmonary infection score (CPIS), based on six variables: temperature, blood leukocyte count, volume and purulence of tracheal secretions, oxygenation, pulmonary radiography, and semi-quantitative culture of tracheal aspirate. The score ranges from 0 to 12. A CPIS of >6 had a sensitivity of 93% and a specificity of 100%.11 The accuracy of the CPIS in the diagnosis of VAP is debated, despite its clinical popularity. One meta-analysis evaluating the accuracy of the CPIS in diagnosing VAP reported pooled estimates for sensitivity and specificity of 65% (95% CI 61-69%) and 64% (95% CI 60-67%), respectively.12 The poor accuracy of clinical criteria for diagnosing VAP arises partly because purulent tracheobronchial secretions are common in patients receiving prolonged mechanical ventilation and are rarely caused by pneumonia. Moreover, systemic signs of pneumonia such as fever, tachycardia and leukocytosis are non-specific; they can be caused by any state that releases the cytokines interleukin-1, interleukin-6, interleukin-8, tumor necrosis factor alpha (TNFα) and gamma interferon.13,14 The weak point of the CPIS is probably its inter-observer variability (kappa = 0.16), since subjective judgement is required when assessing the quality of tracheal secretions (purulent/non-purulent) and the presence of an infiltrate on the chest radiograph.15
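For illustration only, the arithmetic of the score can be sketched as a minimal Python example. It assumes the six component sub-scores have already been assigned by the clinician (so that the total falls in the 0-12 range quoted above) and simply sums them and applies the >6 cut-off; the function names and input format are hypothetical and not part of any published implementation.

```python
# Illustrative sketch only: sums pre-assigned CPIS component sub-scores
# (the per-component point assignments are not detailed in this review)
# and applies the >6 cut-off reported by Pugin et al.

CPIS_COMPONENTS = (
    "temperature",
    "leukocyte_count",
    "tracheal_secretions",
    "oxygenation",
    "chest_radiograph",
    "tracheal_aspirate_culture",
)

def cpis_total(component_scores: dict) -> int:
    """Sum the six CPIS component sub-scores (total range 0-12)."""
    missing = [c for c in CPIS_COMPONENTS if c not in component_scores]
    if missing:
        raise ValueError(f"Missing CPIS components: {missing}")
    total = sum(component_scores[c] for c in CPIS_COMPONENTS)
    if not 0 <= total <= 12:
        raise ValueError("CPIS total must lie between 0 and 12")
    return total

def cpis_suggests_vap(component_scores: dict) -> bool:
    """Apply the CPIS > 6 threshold described by Pugin et al."""
    return cpis_total(component_scores) > 6

# Hypothetical example: component sub-scores summing to 7 exceed the cut-off.
example = {
    "temperature": 1,
    "leukocyte_count": 0,
    "tracheal_secretions": 2,
    "oxygenation": 2,
    "chest_radiograph": 1,
    "tracheal_aspirate_culture": 1,
}
print(cpis_total(example), cpis_suggests_vap(example))  # 7 True
```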
Radiologic Diagnosis
Radiographic evidence of pneumonia in ventilated patients is also notoriously inaccurate. In a study of autopsy-proven VAP, air bronchograms were the only sign that correlated with pneumonia in the overall population, and no specific roentgenographic sign correlated with pneumonia in patients with adult respiratory distress syndrome. The differential diagnoses of VAP based on radiographic appearance include adult respiratory distress syndrome, congestive heart failure, atelectasis, pulmonary embolism and neoplastic infiltration.16
Microbiologic Diagnosis
The type of specimen that should be obtained for microbiologic processing as soon as VAP is suspected is another important issue. Quantitative culture matters for any diagnostic laboratory because all respiratory secretion samples are contaminated by oropharyngeal bacteria, yet it is still not undertaken in many hospitals today.16,17
Blood cultures
Blood cultures have limited value because organisms isolated from blood in suspected VAP cases are often from extrapulmonary sites of origin.18 Blood cultures in patients with VAP are clearly useful if there is suspicion of another probable infectious condition, but the isolation of a microorganism in the blood does not confirm that microorganism as the pathogen causing VAP.
Quantitative cultures of airway specimens
Simple qualitative culture of endotracheal aspirates has a high percentage of false-positive results owing to the bacterial colonization of the proximal airways observed in most ICU patients.20 Quantitative endotracheal aspirate (QEA) cultures may have an acceptable overall diagnostic accuracy, similar to that of several other, more invasive techniques including bronchoalveolar lavage (BAL), protected BAL (pBAL), protected specimen brush (PSB) or tracheobronchial aspirate (TBA).7,19,20 Threshold values often employed for diagnosing pneumonia by quantitative culture are ≥10⁵ to 10⁶ CFU/ml for QEA, ≥10⁴ CFU/ml for bronchoscopic BAL, and ≥10³ CFU/ml for PSB, with ≥10⁵ CFU/ml being the most widely accepted value for QEA.21,22,23 Blind aspiration sampling can lead to errors, but bronchoscopy also carries risks, such as cardiac arrhythmia, hypoxemia, bleeding and pneumothorax, along with greater costs in terms of both time and resources. It is accepted that specimens for culture should be taken before the first dose of antibiotic is administered or before any change in treatment, so that the results can be interpreted validly.24 Lalwani et al., in their study, observed that the culture results of a properly collected tracheal aspirate should be considered alongside the Centers for Disease Control and Prevention (CDC) diagnostic criteria to maximize the diagnosis of VAP.25
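As a purely illustrative aid, the specimen-specific cut-offs quoted above can be expressed as a small lookup, shown in the hedged Python sketch below. The function and dictionary names are invented for this example; the thresholds are the commonly cited values from the text, not a validated decision tool.

```python
# Illustrative sketch: checks a quantitative culture result against the
# commonly cited diagnostic thresholds quoted in this review
# (QEA >= 10^5, bronchoscopic BAL >= 10^4, PSB >= 10^3 CFU/ml).

THRESHOLDS_CFU_PER_ML = {
    "QEA": 1e5,   # quantitative endotracheal aspirate
    "BAL": 1e4,   # bronchoscopic bronchoalveolar lavage
    "PSB": 1e3,   # protected specimen brush
}

def meets_diagnostic_threshold(specimen_type: str, cfu_per_ml: float) -> bool:
    """Return True if the colony count reaches the threshold for the specimen."""
    if specimen_type not in THRESHOLDS_CFU_PER_ML:
        raise ValueError(f"Unknown specimen type: {specimen_type}")
    return cfu_per_ml >= THRESHOLDS_CFU_PER_ML[specimen_type]

# Hypothetical example: a BAL growing 5 x 10^4 CFU/ml exceeds the 10^4
# threshold, whereas the same count from a QEA falls below the 10^5 value.
print(meets_diagnostic_threshold("BAL", 5e4))  # True
print(meets_diagnostic_threshold("QEA", 5e4))  # False
```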
The recent guidelines of the Society for Healthcare Epidemiology of America/Infectious Diseases Society of America (SHEA/IDSA) recommend Gram staining of endotracheal aspirates. However, the sensitivity (57-95%) and specificity (48-87%) of this technique are highly variable. The role of procalcitonin and other biomarkers in the diagnosis of VAP is as yet unsubstantiated.5,26
Since the diagnosis of VAP is founded on radiographic findings of pneumonia, which have intrinsic variability in technique, interpretation and reporting, and on clinical signs and symptoms, which are subjective, in 2011 a Working Group of the CDC proposed a new approach to surveillance for Ventilator-Associated Events (VAE) (Table 1). According to the new CDC definition algorithm, VAP is an Infection-related Ventilator-Associated Complication (IVAC) occurring on or after 3 days of mechanical ventilation and within 2 days before or after the onset of worsening oxygenation, in which purulent respiratory secretions with positive cultures, or other objective signs of respiratory infection, are found.27
Table 1: CDC Algorithm for VAP diagnosis30
1 = Purulent respiratory secretions AND one of the following:
Positive culture of endotracheal aspirate, ≥10⁵ CFU/ml*
Positive culture of bronchoalveolar lavage, ≥10⁴ CFU/ml*
Positive culture of lung tissue, ≥10⁴ CFU/ml*
Positive culture of protected specimen brush, ≥10³ CFU/ml*
2 = One of the following (without requirement for purulent respiratory secretions):
Positive pleural fluid culture
Positive lung histopathology
Positive diagnostic test for Legionella spp.
Positive diagnostic test on respiratory secretions for influenza virus, respiratory syncytial virus, adenovirus or parainfluenza virus
On or after calendar day 3 of mechanical ventilation and within 2 calendar days before or after the onset of worsening oxygenation, criterion 1 or 2 is met (*or equivalent semi-quantitative result).
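The timing element of this surveillance definition (criterion met on or after ventilation day 3 and within 2 calendar days of the onset of worsening oxygenation) can be sketched informally as below. This is a simplified illustration of the window logic only, with hypothetical function and parameter names, and not the full CDC algorithm.

```python
# Illustrative sketch of the timing window in the CDC surveillance algorithm
# quoted above: the purulence/microbiology criterion must be met on or after
# calendar day 3 of mechanical ventilation and within 2 calendar days before
# or after the onset of worsening oxygenation.

def within_vap_surveillance_window(criterion_day: int,
                                   oxygenation_worsening_day: int) -> bool:
    """Days are counted as calendar days of mechanical ventilation (day 1 = start)."""
    on_or_after_day_3 = criterion_day >= 3
    within_two_days = abs(criterion_day - oxygenation_worsening_day) <= 2
    return on_or_after_day_3 and within_two_days

# Hypothetical example: worsening oxygenation on ventilation day 5, positive
# quantitative culture collected on day 6 -> falls within the window.
print(within_vap_surveillance_window(criterion_day=6,
                                     oxygenation_worsening_day=5))  # True
```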
Table 2: Practices for which insufficient evidence or no consensus about efficacy exists8,57
Rotational or turning therapy: Routine use of turning or rotational therapy, either by ‘kinetic’ therapy or by continuous lateral rotational therapy
Systemic antimicrobial agent prophylaxis: Routine administration of systemic antimicrobial agent(s) to prevent pneumonia in patients receiving mechanically assisted ventilation; changes in the class of antimicrobial agents used for empiric therapy
Oral chlorhexidine rinse for oropharyngeal colonization: Routine use of an oral chlorhexidine rinse for the prevention of healthcare-associated pneumonia in all postoperative or critically ill patients and/or other patients at high risk for pneumonia
Ventilator breathing circuits with HMEs: No recommendation can be made for the preferential use of HMEs to prevent pneumonia in patients receiving mechanically assisted ventilation; no recommendation can be made for placing a filter or trap at the distal end of the expiratory-phase tubing of the breathing circuit to collect condensate
Suctioning of respiratory tract secretions: No recommendation can be made for the preferential use of either the multiuse closed-system suction catheter or the single-use open-system suction catheter
Prevention of aspiration associated with enteral feeding: Use of small-bore tubes for enteral feeding; administration of enteral feedings continuously or intermittently
Patient care with tracheostomy: Daily application of a topical antimicrobial agent at the tracheostoma
Gloving: Wearing sterile rather than clean gloves when performing endotracheal suctioning
STRATEGIES FOR VAP PREVENTION
There are multiple recommended measures for the prevention of VAP. Practices for which insufficient evidence or no consensus about efficacy exists are summarized in Table 2. Preventive VAP strategies can be grouped into two classes: non-pharmacologic strategies, which focus on preventing aspiration, and pharmacologic strategies, which aim to prevent colonization.
Non-Pharmacologic Strategies
Staff Education in the Intensive Care Unit
Barriers to adherence to VAP prevention recommendations include disagreement with the reported results of source studies, resource paucity, high costs, inconvenience for nurses, fear of potential adverse effects and patient discomfort. There is considerable variability in practice between countries regarding humidification systems, intubation route, endotracheal suction systems, kinetic therapy beds, subglottic secretion drainage and body position. For efficient patient care, staffing must be sufficient and staff must be able to comply with essential infection control practices and other prevention strategies.17,28
Hand Hygiene
Microorganisms can be spread easily from patient to patient on the hands of healthcare workers. Wrist watches, rings, bangles and other jewellery commonly act as reservoirs for organisms and impede effective hand cleaning. Moreover, healthcare workers’ compliance with hand hygiene is low, and a high workload decreases compliance further.29
Impact of patient position
Patients nursed semi-recumbent at 45 degrees have a significantly lower incidence of clinically diagnosed VAP than patients positioned supine.30 The incidence of clinically diagnosed VAP among patients positioned prone does not differ significantly from that among patients positioned supine.31,32
Kinetic Beds
Critically ill patients often remain immobile in the supine position for long periods; functional residual capacity is decreased because of alveolar closure in dependent lung zones, and mucociliary clearance is impaired. This leads to accumulation of mucus, onset of atelectasis and ensuing infection.33 Rotational therapy uses a special bed designed to turn the patient continuously, or nearly continuously, from side to side; specific designs include kinetic therapy and continuous lateral rotation therapy (CLRT).34,35
Artificial Airway Management
Oral vs Nasal Intubation: Both nasogastric and nasotracheal tubes can cause oropharyngeal colonization and nosocomial sinusitis. Thus, use of the oral route for both endotracheal and gastric intubation should be considered to decrease the risk of VAP.36
Endotracheal tube cuff pressure: Secretions that pool above the inflated endotracheal tube cuff may be a source of aspirated material and ensuing VAP. The pressure of the endotracheal tube cuff should be optimized to prevent leakage of colonized subglottic secretions into the lower airways. Persistent cuff pressures below 20 cm H2O have been associated with the development of VAP.37
Silver-Coated Endotracheal Tubes: Silver-coated endotracheal tubes appear to be safe; they reduce bacterial biofilm formation, have bactericidal activity, reduce bacterial burden and can delay airway colonization. However, further studies are needed to determine their efficacy.38,39
Mechanical Ventilation Management
Ventilator Circuit Change: The CDC’s recommendation is: ‘Do not change routinely, on the basis of duration of use, the breathing circuit that is in use on an individual patient. Change the circuit when it is visibly soiled or mechanically malfunctioning.’40
Humidification With Heat and Moisture Exchangers: The effect of HME in preventing VAP is still controversial and recent studies have failed to show a significant difference in rates of infection.41
Subglottic secretion drainage: Intermittent drainage of subglottic secretions using an inspiratory pause during mechanical ventilation results in a significant reduction in VAP.42 Subglottic secretion drainage (SSD) reduces VAP in patients ventilated for >72 hours and should be considered alongside other recommended strategies such as semi-recumbent positioning.43
Pharmacologic Strategies
Modulation of Oropharyngeal Colonization
Policies encouraging routine topical oral decontamination with chlorhexidine merit re-evaluation. It is a cheap measure, but whether it is a safe one − that is, whether it avoids selecting resistant microorganisms − remains to be investigated.8,44
Selective Decontamination of the Digestive Tract
Selective decontamination of the digestive tract (SDD) is the decontamination of potentially pathogenic microorganisms living in the mouth and stomach, whilst preserving the indigenous anaerobic flora. SDD is an effective and safe preventive measure in ICUs where the incidence rates of MRSA and VRE are low, but in ICUs with high rates of multi-resistant microorganisms it is a measure that is effective but not safe.45,46
Stress Ulcer Prophylaxis
Patients at risk of clinically important gastrointestinal bleeding (shock, respiratory failure requiring mechanical ventilation or coagulopathy) should receive H2 antagonists such as ranitidine rather than sucralfate.47
Ventilator sedation protocol
In patients receiving mechanical ventilation and requiring sedative infusions with midazolam or propofol, the use of a nurse-implemented sedation protocol decreases the rate of VAP and the duration of mechanical ventilation.48 An objective assessment-based Analgesia-Delirium-Sedation (ADS) protocol without daily interruption of medication infusion decreases ventilator days and hospital length of stay in critically ill trauma patients.49
Antibiotic Policy and Infection Control
Rational antibiotic policy is a key issue for better patient care and for preventing antimicrobial resistance.50,51 Infection control programs, such as a scheduled switch of antibiotic class, have demonstrated efficacy in reducing nosocomial infection rates and restraining the emergence of multidrug-resistant (MDR) microorganisms.52
VAP prevention in low resource/developing countries
Though the incidence of VAP has declined in developed countries, it remains unacceptably high in the developing world. Its incidence in these countries is 20 times that in developed nations, with significant morbidity, mortality and increased ICU length of stay, which may represent an additional burden on the scarce resources of developing countries.53 Insufficient preventive strategies and, probably, inappropriate antibiotic administration may have led to this scenario. Since the microbiology and resistance patterns in India differ from those in other countries, data from our country are needed to choose appropriate antimicrobials for management.54 Simple and effective preventive measures can be instituted easily and at minimal cost. Such measures include hand hygiene, diligent respiratory care, elevation of the head of the bed, oral rather than nasal cannulation, minimization of sedation, institution of weaning protocols, judicious antibiotic use, de-escalation, and leveraging the PK/PD characteristics of the antibiotics administered. More costly interventions should be reserved for appropriate situations. An emphasis on practical, low-cost, low-technology, easily implemented measures to prevent VAP is the need of the hour.
Ventilator-associated events (VAE) surveillance: an objective patient safety opportunity
Surveillance for ventilator-associated pneumonia is challenging and contains many subjective elements, including the use of chest x-ray evidence of pneumonia. In January 2013, the CDC convened a VAP Surveillance Definition Working Group, which transitioned VAP surveillance to ventilator-associated event (VAE) surveillance in adult inpatient settings.55 The VAE algorithm, which is a surveillance algorithm and not intended for use in the clinical management of patients, consists of three tiers of definitions: Tier 1, Ventilator-Associated Conditions (VAC); Tier 2, Infection-related Ventilator-Associated Complications (IVAC); and Tier 3, Possible and Probable VAP.27 Tier 1, VAC, attempts to identify episodes of sustained respiratory deterioration and captures both infectious and non-infectious conditions and complications occurring in patients receiving mechanical ventilation. Tier 2, IVAC, is intended to identify the subset of VACs that are potentially related to pulmonary and extrapulmonary infections of sufficient severity to trigger respiratory deterioration. Tier 3, possible and probable VAP, attempts to identify the subset of IVAC patients with respiratory infection, as manifested by objective evidence of purulent respiratory secretions (purulence being defined by quantitative or semi-quantitative criteria for the number of neutrophils on Gram stain) and/or positive results of microbiological tests on respiratory specimens. Because of the wide variation in lower respiratory tract specimens, their collection procedures, and laboratory processing and reporting of results, the CDC Working Group determined that it was not appropriate to include these data elements in the VAC and IVAC definitions.56
This three-tier approach does not accurately identify VAP for surveillance purposes; rather, it focuses more broadly on complications of mechanical ventilation. It may also reduce the likelihood of manipulation that could artificially lower event rates. Most VAEs are caused by pneumonia, pulmonary edema, atelectasis or acute respiratory distress syndrome. In a few recent studies, concordance between the VAE algorithm and VAP was found to be poor.57 Thus, more studies are needed to further validate VAE surveillance against conventional VAP surveillance using robust microbiologic criteria, particularly bronchoalveolar lavage with a protected specimen brush for diagnosing VAP, and to better characterize the clinical entities underlying VAE.
Bundle approach to prevention of VAP
One of the five goals of the ‘Saving 100,000 Lives’ campaign, launched by the Institute for Healthcare Improvement, is to prevent VAP and the deaths associated with it by implementing a set of interventions for better patient care known as the ‘ventilator bundle’. The interventions should have scientific support of effectiveness based on randomized controlled trials, and all elements of the bundle must be executed at the same time. The VAP bundle includes four components: (a) elevation of the head end of the bed to 30-45º, (b) daily interruption of sedation, (c) daily assessment of readiness to extubate and (d) prophylaxis for deep venous thrombosis and peptic ulcer disease. The bundle approach to prevention of VAP has been found to be highly effective in reducing incidence, mortality and ICU stay.5,58,59 The ventilator bundle should be modified and expanded to include specific processes of care that have been definitively demonstrated to be effective in reducing VAP. A multidimensional framework with a long-lasting program can successfully increase compliance with preventive measures directly dependent on healthcare workers’ bedside performance.
CONCLUSION
Ventilator-associated pneumonia is one of the most common nosocomial infections in the ICU, presenting with non-specific symptoms and clinical signs. Quantitative cultures obtained by different methods, including EA, BAL, pBAL, PSB or TBA, appear broadly equivalent for diagnosing VAP. Clinical criteria used in combination may be useful in VAP diagnosis; however, their inter-observer variability and moderate performance must be considered.
Preventive strategies should focus on better secretion management and on reducing bacterial colonization. Further research on targeted interventions is needed to effectively reduce VAP incidence. A multidisciplinary approach to VAP is required, including setting preventive benchmarks, establishing goals and timelines, and providing appropriate education, training, audit and feedback to staff, while continually updating practice in line with relevant clinical and preventive strategies.
During my placement in Psychiatry at the Brooker Centre, Runcorn, UK, I have come into contact with a wide array of psychiatric disorders, none more so than borderline personality disorder (BPD). It is undoubtedly one of the most prevalent problems in the area which the Brooker Centre serves. I can recall an example of a patient with BPD who had been quite unwell for a prolonged period of time and had struggled with affective instability. This patient had been quite successfully treated with Lithium therapy, has exhibited stability and is happy on the current treatment. There is a pattern of pharmacological treatment in BPD patients despite the fact that guidelines suggest otherwise…
Personality disorders are defined as ‘an enduring pattern of inner experience and behaviour that deviates markedly from the expectations of the individual’s culture, is pervasive and inflexible, has an onset in adulthood, is stable over time, and leads to distress or impairment’. Personality disorders are representative of long-term functioning and are not considered in terms of episodes of illness 1.
The Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), groups the various personality disorders into three clusters based on their descriptive similarities.
Cluster A includes the Paranoid, Schizoid, and Schizotypal personality disorders which are categorised as ‘odd/eccentric’;
Cluster B includes the Antisocial, Borderline, Histrionic, and Narcissistic personality disorders which are categorised as ‘dramatic/emotional/erratic’;
Cluster C includes the Avoidant, Dependent, and Obsessive-compulsive personality disorders which are categorised as ‘anxious/fearful’ 2.
The International Classification of Diseases, 10th edition (ICD-10), specifies the condition of emotionally unstable personality disorder which has two subtypes: The impulsive type and the borderline type. The borderline type in essence overlaps with the DSM-5 definition 3.
It has proven difficult to provide robust clinical recommendations with regards to the treatment of personality disorder. This is, in part, due to the fact that study populations are diverse but also compounded by the use of different assessment criteria. Furthermore, it is important to consider that personality disorders often present with a great deal of psychiatric comorbidity. Of the personality disorders, particular attention has been paid to borderline personality disorder (BPD) as the symptom clusters which it involves have been shown to improve considerably with treatment 4.
Figure 1: Diagnostic Criteria for Borderline Personality Disorder according to DSM-5 2:
A pervasive pattern of instability of interpersonal relationships, self-image, and affects, and marked impulsivity, beginning by early adulthood and present in a variety of contexts, as indicated by five (or more) of the following:
1. Frantic efforts to avoid real or imagined abandonment. (Note: Do not include suicidal or self-mutilating behaviour covered in Criterion 5.)
2. A pattern of unstable and intense interpersonal relationships characterised by alternating between extremes of idealization and devaluation.
3. Identity disturbance: markedly and persistently unstable self-image or sense of self.
4. Impulsivity in at least two areas that are potentially self-damaging (e.g. spending, sex, substance abuse, reckless driving, binge eating). (Note: Do not include suicidal or self-mutilating behaviour covered in Criterion 5.)
5. Recurrent suicidal behaviour, gestures, or threats, or self-mutilating behaviour.
6. Affective instability due to a marked reactivity of mood (e.g. intense episodic dysphoria, irritability, or anxiety usually lasting a few hours and only rarely more than a few days).
7. Chronic feelings of emptiness.
8. Inappropriate, intense anger or difficulty controlling anger (e.g. frequent displays of temper, constant anger, recurrent physical fights).
9. Transient, stress-related paranoid ideation or severe dissociative symptoms.
Borderline personality disorder is characterised by a pervasive instability in mood, interpersonal relationships, self-image and behaviour. The condition was first recognised in the United States by Adolf Stern in 1938. He described a group of patients who fitted into neither the psychotic nor the psychoneurotic group, which gave rise to the term ‘borderline’. BPD is often diagnostically comorbid with depression and anxiety, eating disorders (notably bulimia), post-traumatic stress disorder, substance misuse and bipolar affective disorder. Psychotic disorders have also been found to overlap. Because of this extent of comorbidity it is rare to see a patient with pure BPD 5.
The pharmacological treatments of BPD are tailored according to the symptom clusters that present. These include impulsivity, affective instability, transient stress-related psychotic symptoms and suicidal & self-injurious behaviours 5,6.
Recommended Psychological and Pharmacological treatment, 2009 National Institute for Health and Clinical Excellence (NICE) guidelines on Borderline Personality Disorder5, 7:
Psychological
NICE guidelines state that when offering psychological therapy for BPD or for the individual symptoms of the disorder, brief psychological interventions (i.e. of less than 3 months’ duration) should not be used. They state that the frequency of psychotherapy sessions should be adapted to the patient’s needs and ‘context of living’, and suggest that twice-weekly sessions may be considered. The guidelines also specify that for women with BPD for whom recurrent self-harm is a priority, a comprehensive dialectical behaviour therapy programme should be considered. NICE recommends that when psychological treatment is provided in BPD, the effects should be monitored using a broad range of outcomes, including personal functioning, drug and alcohol use, self-harm, depression and the symptoms of BPD.
Pharmacological
The NICE guidance states that drug treatment should not be used specifically for BPD or for the individual symptoms or behaviours associated with the disorder (e.g. repeated self-harm, marked emotional instability, risk-taking behaviour and transient psychotic symptoms). It goes on to advise that antipsychotics should not be used for the medium- and long-term treatment of BPD. However, with regard to the management of comorbidities, it specifies that drug treatment may be considered and that, in each case, the NICE guideline for the comorbid condition should be referred to. Antidepressants, mood stabilisers and antipsychotics are commonly used in clinical practice. The guidelines mention that short-term use of sedative medication may be considered in a crisis; ‘short-term’ denotes treatment lasting no longer than one week.
With regards to drug treatment during a period of crisis, NICE recommends that there should be a consensus among prescribers and other involved professionals about the proposed drug treatment and also that a primary prescriber should be identified. There should be an appreciation of the likely risks of prescribing, including alcohol and illicit drug use. NICE emphasises that the psychological role of prescribing (both from the patient’s and prescriber’s perspective) should be taken into account, and the impact that such prescribing decisions may have on the therapeutic relationship and overall care plan. NICE recommends that a single drug be used and that polypharmacy is to be avoided as much as possible.
In a crisis, NICE recommends prescribing ‘a drug that has a low side-effect profile, low addictive properties, minimum potential for misuse and relative safety in overdose.’ The minimum effective dose is favoured, prescribing fewer tablets more frequently if there is a significant risk of overdose, and agreeing with the patient which symptoms are being targeted. NICE suggests that following a crisis, a plan should be made to stop any drug treatment that was started during the crisis. If this is not possible, regular review of the effectiveness, side effects, misuse and dependency of the drug is advised. BPD patients often have concomitant insomnia; for this, NICE details basic advice on sleep hygiene and refers on to the guidance on the use of zaleplon, zolpidem and zopiclone for the short-term pharmacological management of insomnia.
AIMS AND OBJECTIVES
This report will review the current guidelines on the management of borderline personality disorder and explore the literature according to the research recommendations set by NICE. The report focuses on two aspects of the management of BPD: the psychological/psychosocial and the pharmacological.
CURRENT NICE GUIDELINES ON PSYCHOLOGICAL AND PHARMACOLOGICAL TREATMENT OF BPD 7:
Psychology
Mentalisation-based therapy and dialectical behavioural therapy are proposed in the setting of a ‘well structured, high quality community based service’, e.g. a day hospital setting or a community mental health team. NICE suggests that these techniques should be compared with ‘high-quality community care delivered by general mental health services without the psychological intervention for people with BPD’ in order to measure efficacy. For outpatients, cognitive analytic therapy, cognitive behavioural therapy, schema-focussed therapy and transference-focussed therapy are suggested and are catered to those with less severe BPD (i.e. fewer comorbidities, higher level of social functioning, greater ability to depend on self-management methods). Randomised controlled trials reporting medium-term outcomes (e.g. quality of life, psychosocial functioning, employment outcomes and BPD symptomatology) over a minimum of 18 months are recommended.
Pharmacology
Mood stabilisers are proposed as it is detailed that emotional instability is a key feature in BPD. In particular, topiramate and lamotrigine are mentioned as they have been shown to produce encouraging results in small-scale studies. A randomised placebo-controlled trial with medium to long-term follow up is recommended.
ANALYSIS
Psychology: Dialectical Behaviour Therapy (DBT)
Dialectics can be defined as the art of investigating the relative truth of opinions, principles, and guidelines 8. ‘Dialectical’ in DBT refers to a means of arriving at the truth by examining an argument, i.e. the ‘thesis’ and ‘antithesis’, and resolving the two into a rational synthesis. DBT was introduced in 1991 by Marsha Linehan (a psychology researcher) and colleagues, tailored as a treatment for BPD. In it, patients are supported in understanding their own emotional experiences and are taught new skills for dealing with their stresses. A combination of individual and group sessions is used. More adaptive responses and effective problem-solving techniques are integrated to improve functioning and quality of life as well as morbidity and mortality 9, 10.
A study published in 2015 by M. Linehan et al detailed a randomized clinical trial that set out to compare
1) Standard DBT (DBT group skills training + DBT individual therapy) with
2) A treatment that evaluated DBT group skills training with manual case management (i.e. with the removal of DBT individual therapy) and
3) A treatment that removed DBT skills training by providing only DBT individual therapy with an activities group and prohibited individual therapists from teaching DBT skills.
All 3 versions of the treatment were found to be comparably effective at reducing suicide attempts, suicidal ideation, medical severity of intentional self-harm, use of crisis services owing to suicidality and improving reasons for living 11.
Psychology: Mentalization based therapy
Mentalization is ‘the process by which we make sense of each other and ourselves, implicitly and explicitly, in terms of subjective states and mental processes.’ It is a social construct suggesting that we are attentive to the mental states of those we are with, physically or psychologically 12. Mentalization based treatment is a psychosocial treatment for BPD in which therapists monitor attachment and mentalizing capacity, and use interventions that aim to reinstate or maintain the capacity of patients to mentalize 13.
A longitudinal study, published in 2008, involving an eight-year follow-up of patients treated for BPD evaluated the effect of mentalization-based treatment (MBT) with partial hospitalization compared with treatment as usual. Five years after discharge from MBT, the MBT group exhibited clinical and statistical superiority to treatment as usual measured on suicidality, diagnostic status, service use, medication use, global function and vocational status 14. A more recent review article, published in 2015, emphasises the consideration of disruptions in three closely related domains in individuals with BPD. These are ‘in attachment relationships, in different polarities of mentalizing, and in the quality of epistemic vigilance and trust’. It is suggested that this approach allows seemingly paradoxical features of BPD patients appear more coherent. It is supposed that this approach provides a clear focus for the therapist enabling them to monitor the therapeutic process in terms of imminent mentalizing impairments and epistemic mistrust due to activation of the attachment system.
The article goes on to assert that the effectiveness of MBT in BPD may be elucidated due to the fact that it ‘enables the therapist to maintain and foster a mentalizing stance, even–and perhaps particularly–under high arousal conditions that are so characteristic of work with these patients’ 15.
Psychology: Cognitive analytic therapy (CAT)
CAT is a brief focal therapy that is informed by cognitive therapy, psychodynamic psychotherapy and elements of cognitive psychology. It was originally developed by Anthony Ryle tailored towards the needs of the NHS 16. It is based on a collaborative therapeutic position which sets out to create narrative and diagrammatic reformulations with patients concerning their difficulties. The theory centres on descriptions of sequences of linked external, mental and behavioural events. At first, the emphasis was on how such procedural sequences prevented revision of dysfunctional ways of living. More recently, this has been extended to understanding the origins of reciprocal role procedures in early life and their repetition in current relationships and self-management 17.
One study detailed a randomised controlled trial which aimed to investigate the effectiveness of time-limited CAT for participants with personality disorder. The study found that participants receiving CAT exhibited reduced symptoms and showed considerable improvement compared with the control group who showed signs of deterioration during the treatment period. They concluded that CAT is superior to treatment as usual in improving outcomes associated with personality disorder 18.
Psychology: Cognitive behavioural therapy (CBT) and Schema-focussed therapy
CBT is a time-limited, problem focussed psychotherapy that has been applied to a wide range of psychiatric disorders. The development of this technique was born out of the observation that patients referred for psychotherapy often would hold ingrained, negatively skewed assumptions of themselves, their future and their environment. The therapy is based on the notion that disorder is caused not by life events, but by the view the patient adopts of events. The therapy focusses on current problems and helps to develop new skills to provide symptom relief and sustain recovery 9, 19.
Initially CBT was predominantly insight-orientated, using introspection to bring about change. Beck et al began to integrate a range of behavioural techniques to improve the impact on dysfunctional controlling belief systems (schemas). The goal of treatment is not to replace the dysfunctional schemas; it aims to modify beliefs and develop new ones allowing the patient to cope more effectively in challenging situations 20, 21.
A 2013 review article that set out to explore schema-focussed therapy concluded that schema-therapy is based on a ‘cohesive theoretical model’ and that there seems to be sufficient evidence supporting its validity. Regarding effectiveness, it goes on to indicate that one should be encouraged by the results of studies; however, it points out that because of the small number of ‘methodologically-good efficacy studies’ it is difficult to be certain. The article claims that when evaluated against other psychotherapeutic treatments, specifically DBT and MBT, schema-therapy requires more investigation 22. A pilot study (2013) set out to monitor the effects of group schema-based CBT on global symptomatic distress in young adults with personality disorders or features of personality disorder. Its findings provide preliminary evidence that schema-based CBT might be an effective treatment 23.
Furthermore, there is a multicentre randomized controlled trial being conducted with the aim of investigating schema-focussed therapy versus treatment as usual in BPD, which has a closing date of 1st February 2016 24.
Psychology: Transference focussed therapy (TFT)
The classic use of the term transference originates in psychoanalysis and comprises “the redirection of feelings and desires and especially of those unconsciously retained from childhood toward a new object” 25. Transference-focussed psychotherapy is an evidence-based manualised treatment using a psychodynamic approach with a focus on object relations theory 26. TFT aims to ‘facilitate the reactivation, under controlled circumstances, of the dissociated internalised object relations in the transference relationship to observe the nature of the patient’s split polarised internal representations, and then, through a multistep interpretive process, work to integrate them into a fuller, richer, and more nuanced identity’ 27.
Yeomans et al produced an article in 2013 consisting of vignettes illustrating the techniques used in TFT, with a view to evaluating its use in treating BPD. Their findings supported the validity of TFT in treating BPD patients who specifically had difficulty with relationships.
They distilled TFT down to three important components 28:
1) The treatment contracting/setting the frame
2) Managing one’s affective response
3) The interpretative process
Pharmacology
A Cochrane intervention review assessing the effects of drug treatments in BPD included twenty-eight randomised controlled trials, published in the period 1979-2009 (20 of 28 trials dating from 2000 or later), involving a total of 1742 participants 29.
Figure 2: The pharmacological agents that were tested included the following:
The authors arrived at the conclusion that pharmacotherapy in BPD ‘is not based on good evidence from trials’. The review found that there is support for the use of second-generation antipsychotics (in improving cognitive-perceptual symptoms and affective dysregulation), mood stabilisers (in diminishing affective dysregulation and impulsive-aggressive symptoms) and omega-3 fatty acids.
However, these claims were based on single-study effects and therefore require replication. No drug was found to significantly affect the symptom clusters specific to BPD, including avoidance of abandonment, chronic feelings of emptiness, identity disturbance and dissociation.
One noteworthy finding was that Olanzapine was associated with an increase in self-harming behaviour. Furthermore, the review states that ‘special attention’ is needed in BPD when prescribing tricyclic antidepressants (due to toxic effects in overdose) and hypnotics & sedatives (due to there being potential for misuse or dependence). Another problem that was highlighted was that in comorbid eating disorders the use of Olanzapine can contribute to weight gain and Topiramate can produce weight loss.
The review goes on to state that there is no evidence from randomised controlled trials that any drug reduces the overall severity of BPD, which consists of ‘distinct pathology facets’. The authors recommend that the pharmacotherapy of BPD should be targeted at ‘defined symptoms’, and that polypharmacy, which is not supported by the latest evidence, should be avoided as much as possible.
The authors end by reaffirming that the evidence is not robust and that the studies may not satisfactorily reflect certain characteristics of the clinical environment. They propose that further research is needed in order to produce reliable recommendations. They detail the complications that arise from the ‘polythetic nature’ of BPD, i.e. each patient is likely to experience different aspects of the disorder. There is no consensus among researchers on a common battery of outcome variables and measures. They comment that the view of drug effects is fragmentary and that it is unknown how altering one symptom affects another.
Comorbidity
Comorbidity is a foremost concern in the interpretation of data concerning personality disorders 30. The majority of individuals diagnosed with one personality disorder also meet criteria for at least one other personality disorder 30. A large proportion of patients with personality disorder have a comorbid axis I disorder 31, most commonly depression, anxiety, and alcohol and substance use disorders 32. [Axis I refers to the multi-axial classification system used in the Diagnostic and Statistical Manual of Mental Disorders, which was removed in the latest version, DSM-5 2.]
It is therefore important to consider that an improvement in the symptom clusters of personality disorders might in fact reflect an improvement in comorbid axis I disorder symptoms. It is reported that rates of depression are very high in BPD 33 and that the response to antidepressants in depressed individuals with comorbid personality disorders appears lower than in those without comorbid personality disorder 32.
The most recent guidance on the treatment of BPD from the National Health and Medical Research Council of Australia (NHMRC), which reviewed the literature and integrated a series of meta-analyses, details that pharmacotherapy does not appear to be effective in altering the nature and course of BPD and that the evidence does not warrant the use of pharmacotherapy as a sole or first-line treatment for BPD 34.
DISCUSSION
All of the aforementioned psychotherapy techniques have been shown to produce promising results when applied to the treatment of BPD, with some, such as DBT and MBT, standing out due to a relatively robust evidence base. Given the wide variety of approaches that all show some capacity for successful treatment of BPD, it is clear that these approaches must be taken more seriously in clinical practice. These treatments have been shown to considerably improve symptomatic outcomes; a shortcoming, however, is that they have failed to significantly improve social functioning. Each of the therapies follows a distinct theory, yet similar effects are seen when each treatment modality is applied to BPD. This is intriguing and should be explored further.
An analysis of these therapies revealed some common features which are now suggested as core requirements for all effective psychotherapeutic treatments:
Figure 3: Five common characteristics of evidence-based treatments for BPD 35, 36.
1. Structured (manual directed) approaches to prototypic BPD problems
2. Patients are encouraged to assume control of themselves (i.e. sense of self-agency)
3. Therapists help connections of feelings to events and actions
4. Therapists are active, responsive, and validating
5. Therapists discuss cases with others (including personal reactions)
An update to the aforementioned Cochrane review 29 was published in 2013. The update focussed on the psychotherapies that are available for the treatment of BPD and included a total of 1804 participants spread over 28 studies. The psychotherapies discussed were divided into ‘comprehensive’ if they substantially involved an individual psychotherapy element or as ‘non-comprehensive’ if they did not. The comprehensive therapies included dialectical behaviour therapy, mentalization-based therapy (delivered in either a partial hospitalisation or outpatient setting), transference-focussed therapy, cognitive behavioural therapy, dynamic deconstructive psychotherapy, interpersonal psychotherapy and interpersonal therapy for BPD. These were assessed against a control condition and also with some direct comparisons against each other. Non-comprehensive psychotherapies included DBT-group skills training, emotion regulation group therapy, schema-focussed group therapy, systems training for emotional predictability and problem solving for borderline personality disorder (STEPPS), STEPPS plus individual therapy, manual assisted cognitive treatment, and psychoeducation 37.
The authors concluded that both comprehensive and non-comprehensive therapies indicated beneficial effects for the core pathology of BPD and associated general psychopathology. The authors identified that dialectical behaviour therapy had been studied the most comprehensively followed by mentalization-based therapy, transference-focussed therapy, schema-focussed therapy and STEPPS. However, the authors do state that none of the treatments presented a very robust evidence base and that there are concerns over the quality of individual studies 37.
In terms of pharmacotherapy, the NICE and NHMRC guidelines agree with the 2006 Cochrane interventional review among others 38, 39 that there is some evidence that some second-generation antipsychotics (aripiprazole and olanzapine) and some mood stabilisers (topiramate, lamotrigine and valproate) could improve BPD symptoms in the short term. However, for some of these agents, it is necessary to balance risks against benefits as they have considerable long-term risks (e.g. with antipsychotics, extrapyramidal side effects such as tardive dyskinesia can persist even after withdrawal of the drug 40). Such risks are not a problem in psychological treatments and it is probable that this influences guidelines. In practice, off-label use of psychotropics is widespread, despite the fact that the NICE guidance advises against their use. It is arguable that clinicians should preferentially use pharmacological treatments that have the strongest evidence base (i.e. antipsychotics and mood stabilisers) and refrain from using agents with the least evidence (i.e. antidepressants and benzodiazepines).
CONCLUSION
Specialist treatments, in particular DBT and MBT, substantiate the use of psychotherapy in BPD, and these findings support the validity of the NICE guidance. However, the array of such treatments must be amalgamated with a view to providing a comprehensive, multi-faceted treatment approach. Each treatment must be broken down in order to outline the components that are particularly useful in BPD, with a view to understanding the condition in greater depth and providing more focussed therapies.
The 2013 Cochrane review 37 highlights that further psychotherapies are available and have been shown to successfully treat BPD core pathology; however, as it clearly states, the evidence base lacks robustness and there is a need for further studies that can replicate results. The therapies included in this Cochrane review but not covered in the guidelines (e.g. STEPPS) may prove to be superior to those put forward by NICE, and I recommend that these be explored thoroughly when the guidelines are due for update.
While the NICE guidance emphasises that the use of psychotropics is reasonable in the management of comorbidities, it is worth noting that to understand BPD, it is necessary to explore both the underlying aberrant psychological processes and the biological processes that manifest in the disorder. This will enable the use of more specific pharmacological therapies in targeting the symptoms of BPD in the future.
Anterior knee pain, or patellofemoral pain, is a common clinical presentation, especially in females, and a challenging clinical problem. The specific cause can be difficult to diagnose as the aetiology remains poorly understood and various pathological entities can result in pain in the anterior aspect of the knee.
Multiple surgical options have been used to treat the condition. Lateral retinacular release is one of these options and has been used to treat anterior knee pain with variable results1-5. The aim of this study was to assess isolated patella lateral retinaculum release as a treatment for anterior knee pain.
Materials and Methods
We performed a retrospective review of all patients who underwent isolated arthroscopic lateral patellar retinacular release by a single surgeon between July 2007 and July 2010. Exclusion criteria were significant patellar instability, severe mal-alignment on both radiological and clinical assessment, and additional procedures including cartilage debridement, meniscal tear repair/excision or patellar stabilization.
Data were collected from case notes (demographics, pre-operative and intra-operative findings and any post-operative complications), archived radiographs and postal questionnaires, including pre- and post-procedure Oxford Knee Scores (OKS) as well as patient satisfaction. Patient satisfaction questions included a grading of satisfaction from 1 (completely dissatisfied) to 10 (completely satisfied) and whether the patient would choose the procedure again if given the choice.
Independent factors assessed were age, sex, tight lateral retinaculum, osteoarthritic x-ray changes of all compartments, intraoperative findings of grade of arthritis and lateral subluxation, and postoperative physiotherapy. The primary outcomes assessed were patient-reported outcome measures, including the improvement in post-procedure OKS and patient satisfaction scores. SPSS Version 20 was used for analysis.
Preoperative and postoperative OKS – total and components – were compared using the Wilcoxon signed-rank test. The Mann-Whitney U test was used for nominal data and the Kruskal-Wallis test for continuous data for total OKS. The individual OKS components compared were the ability to kneel and the ability to climb stairs, as these are more representative of the patellofemoral joint.
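As an illustrative aside, the non-parametric comparisons described above could be reproduced along the following lines (the published analysis was performed in SPSS; the Python sketch below uses entirely hypothetical OKS data and groupings):

```python
# Illustrative sketch only: the three non-parametric tests described above,
# applied to hypothetical Oxford Knee Score (OKS) data (scores range 0-48).
from scipy import stats

pre_oks  = [22, 18, 30, 25, 11, 27, 33, 20, 24, 19, 28, 35]   # before surgery
post_oks = [35, 29, 42, 34, 14, 41, 43, 28, 39, 26, 44, 41]   # after surgery

# Paired pre- vs post-operative total OKS: Wilcoxon signed-rank test
w_stat, w_p = stats.wilcoxon(pre_oks, post_oks)

# Improvement in OKS for each procedure
improvement = [post - pre for pre, post in zip(pre_oks, post_oks)]

# Improvement split by a binary (nominal) factor, e.g. post-operative
# physiotherapy yes/no: Mann-Whitney U test
physio_yes, physio_no = improvement[:6], improvement[6:]
u_stat, u_p = stats.mannwhitneyu(physio_yes, physio_no)

# Improvement compared across more than two groups, e.g. three
# follow-up duration bands: Kruskal-Wallis test
grp_6_12, grp_12_18, grp_over_18 = improvement[:4], improvement[4:8], improvement[8:]
h_stat, kw_p = stats.kruskal(grp_6_12, grp_12_18, grp_over_18)

print(f"Wilcoxon p={w_p:.4f}, Mann-Whitney p={u_p:.4f}, Kruskal-Wallis p={kw_p:.4f}")
```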
Results
Fifty-nine patients were identified, with a male to female ratio of 1:1.5. The mean age was 58.7 years (range 25 to 77). Forty patients (67%) returned completed forms. Four patients had further surgery: three total knee replacements and one subsequent arthroscopic procedure for meniscal tears. These patients were excluded from the study. Four patients had bilateral procedures. Therefore, after excluding those who had further surgery and those who failed to return completed forms, 36 patients were included, on whom 40 procedures had been performed. Osteoarthritic changes, graded according to the Kellgren and Lawrence system, were noted on the medial and lateral facets of the patella on preoperative Merchant views (Table 1), as well as in the tibiofemoral compartment.
Table 1 – Pre-operative radiographic grades of patellofemoral change
Grade | Medial facet: frequency (%) | Lateral facet: frequency (%)
0 | 2 (5) | 1 (2.5)
1 | 6 (15) | 4 (10)
2 | 15 (37.5) | 13 (32.5)
3 | 16 (40) | 15 (37.5)
4 | 1 (2.5) | 7 (17.5)
Total | 40 (100) | 40 (100)
All patients had undergone a standardized preoperative physiotherapy regimen with no significant benefit. Two had had an intra-articular hyaluronic acid injection with no benefit.
All procedures were performed by a single surgeon (PE) and intraoperative findings of cartilage Outerbridge grade were noted in all compartments. Closed lateral retinacular release was performed with Smiley’s knife from just below the lower end up to the upper border of the patella.
Mean follow-up duration was 20.43 ± 10.64 months. Patients were divided into three follow-up duration groups: 6-12 months (6 cases), 12-18 months (18 cases) and >18 months (16 cases). The best results were seen in the 12-18 month group, but no statistically significant difference was found between the groups. There was no significant difference in age and gender distribution amongst the different durations of follow-up. There was also no significant difference in age, gender and duration of follow-up between responders and non-responders to the questionnaire. There were no reported postoperative complications.
Twenty-four (60%) underwent post-operative physiotherapy. The mean OKS improved from 23.05 (range 11-40) to 35.30 (range 14-48) (p < 0.0001). Individual components of OKS, particularly the ability to climb stairs and the ability to kneel, also showed statistically significant improvements (Figure 1, Figure 2).
Fig 1 – OKS – ability to climb stairs
Fig 2 – OKS – ability to kneel
Univariate analysis showed that improvements in total OKS and in the OKS component for ability to kneel were significantly associated with a higher grade of radiographic lateral patellofemoral joint wear (p = 0.025 and 0.042 respectively) and with postoperative physiotherapy (p = 0.018 and 0.003). Improvement in the OKS component for ability to climb stairs was significantly associated with higher-grade cartilage wear, noted intraoperatively, of the trochlea (p = 0.042) and patella (p = 0.022).
However, these OKS components lost this significance if there was Outerbridge grade 3 or greater wear in the tibiofemoral articulation.
The procedure had a high mean satisfaction score of 8.2 (range 4 to 10), and 32 of 36 patients would have the procedure again if needed.
Discussion
Anterior knee pain, or patellar pain syndrome, is a very common clinical problem faced by orthopaedic surgeons; however, the aetiology remains poorly understood. Mori et al 6 identified evidence of degenerative neuropathy in 29 out of 35 histologically examined specimens of resected lateral retinaculum, suggesting that the pain may originate in the lateral retinaculum. Lateral retinacular release would denervate this tissue, producing symptomatic relief. Ostermeier et al 7 measured patellofemoral contact pressures and kinematics using fresh-frozen cadaver specimens both before and after lateral release. They concluded that release could decrease pressure on the lateral patellar facet in flexion but did not stabilize the patella or medialise patellar tracking. This possibly explains our finding of improvement with lateral patellofemoral joint wear.
Arthroscopic lateral release remains a controversial topic because of a lack of well-designed randomised studies. Fulkerson and Shea 8 suggested that knees showing lateral patellar tilt without subluxation were more likely to benefit from a lateral release in the absence of grade III or grade IV changes in the articular cartilage. Korkala et al 9 showed that a lateral release tended to improve symptoms in patients with grade II to grade IV chondromalacia. Our findings concur that the greater the patellofemoral cartilage wear, the more significant the improvement.
Lodhi et al 10 performed a prospective study of elderly patients with patellofemoral osteoarthritis and pain that conservative management had failed to improve. They concluded that the procedure improves function and provides significant pain relief, successfully deferring the need for arthroplasty, and therefore recommended it in middle-aged to elderly patients with symptomatic patellofemoral osteoarthritis.
Twaddle and Parkinson 11, in their retrospective study, suggested lateral release to be an effective, reliable and durable procedure in ‘carefully selected patients’.
Our study has limitations, being a single-surgeon series and a retrospective review. However, it reflects some of the findings from previous studies, suggesting that this is an effective procedure to improve symptoms associated with cartilage changes in the patellofemoral articulation in the absence of significant tibiofemoral joint osteoarthritis. Further well-designed randomised controlled trials are needed to give a more definitive answer.
Conclusion
Isolated lateral patellar retinacular release can be effective for anterior knee pain in carefully selected patients (without significant instability or mal-alignment, with high patellofemoral but low tibiofemoral wear) who have failed conservative management. It particularly improves patients’ ability to kneel and climb stairs, giving a high satisfaction score. The grade of patellofemoral cartilage wear is the most significant factor in determining this, with post-operative physiotherapy further augmenting the good results.
A three-and-a-half-year-old male child presented to the PICU, Narayana Health, Bangalore, with a short history of fever for 8 days and headache and cough for 2 days. At admission the child was febrile, dull-looking and haemodynamically stable, with no meningeal signs or focal neurological deficit. He was admitted and evaluated for the cause of fever. On the same day the child developed generalized seizures along with fever; a possibility of meningitis or electrolyte imbalance (hyponatraemia) was therefore considered, as the initial serum sodium was 128 mEq/L. The cause of the hyponatraemia was investigated and the child was managed with antiepileptic drugs and 3% saline infusion. The initial sepsis screen was inconclusive and CSF analysis showed 3 lymphocytes with low glucose and elevated protein levels; partially treated meningitis was therefore considered (given the history of admission to another hospital for 3 days prior to admission at our hospital), and IV antibiotics at meningitic doses were given. On day 3 of admission the child developed meningeal signs with worsening sensorium, so neuroimaging (MRI brain) was performed, which showed multiple well-defined ring-enhancing lesions in the cerebral and cerebellar hemispheres bilaterally, the thalamus and the pons, with mild perilesional oedema. This radiological picture suggested a possibility of neurocysticercosis; however, the clinical picture did not match, so a paediatric neurology opinion was sought and a simultaneous workup for tuberculosis was started. The child was also started on IV steroids. The neurologist considered CNS toxoplasmosis a strong possibility based on the radiological picture. The workup for TB was inconclusive (negative Mantoux test, normal ESR, gastric aspirate negative for AFB); however, the child was empirically started on category II ATT in view of his deteriorating clinical state. Repeat CSF evaluation showed increasing cell counts with a biochemical picture similar to before; this sample was also sent for GeneXpert (DNA amplification) testing. On day 6 of admission the child developed lethargy and drowsiness, so anti-oedema measures were initiated. The same day he developed tonic posturing with unequal pupils, hypertension and bradycardia, indicating raised intracranial pressure (ICP), for which he was intubated and ventilated; an urgent repeat CT head showed an increase in ventricular size and hydrocephalus. An external ventricular drain (EVD) was placed immediately by the neurosurgeons, after which there was gradual improvement in the child’s condition, and he was extubated within 48 hours. The reports showed negative HIV and toxoplasma serology and a positive CSF GeneXpert study, confirming the diagnosis of CNS tuberculosis; ATT and anti-oedema measures were therefore continued and the EVD was later converted into a VP shunt. By the second week of illness the child became afebrile, with improved sensorium and function.
Fig 1 & 2: MRI showing multiple ring enhancing lesions
Discussion
Tuberculosis remains a leading cause of morbidity and mortality in the developing world. CNS involvement is thought to occur in 2-5% of patients with tuberculosis and up to 15% of those with AIDS-related tuberculosis 1,2. Although CNS involvement by tuberculosis is seen in all age groups, there is a predilection for younger patients, with 60-70% of cases occurring in patients younger than 20 years of age 2. Haematogenous spread from the lungs or gastrointestinal tract is most common, leading to small subpial or subependymal infective foci. These are termed Rich foci and form a reservoir from which intracranial manifestations may arise 3,4. Tuberculomas often present with symptoms and signs of focal neurological deficit without evidence of systemic disease. The radiological features are also nonspecific and the differential diagnosis includes malignant lesions, sarcoidosis, pyogenic abscess, toxoplasmosis and cysticercosis.5,6
Regarding treatment, the Centers for Disease Control and Prevention recommends 12 months of treatment for CNS TB when the M. tuberculosis strain is sensitive to all drugs.7 However, numerous variables can affect the response of the disease to therapy and it has been suggested that treatment duration should be tailored to the radiological response.8 After 12 months of treatment, more than two-thirds of patients still have contrast-enhancing lesions. Although it is not clear whether this represents an active lesion or just inflammation, continuing treatment is probably prudent. Total resolution of the tuberculoma is observed when scans demonstrate no enhancing lesions or only an area of calcification.8
In the case described above, the child had tubercular meningitis, multiple tuberculomas, hydrocephalus and raised ICP. Although the clinical presentation was suggestive of this, the radiological picture and initial CSF findings raised doubt about the diagnosis. Although tuberculoma and NCC share many common clinical features, there are a few distinguishing features: cysticercal lesions are smaller, more numerous, and produce less perilesional oedema and less midline shift than tuberculomas. In our patient, however, the multiple tuberculomas raised a suspicion of NCC, and it was only the GeneXpert test that confirmed our diagnosis.
Hence, conditions like tuberculoma, whose radiological findings can usually be distinguished from other common illnesses such as neurocysticercosis or toxoplasmosis, can sometimes pose a challenge in terms of radiological diagnosis, highlighting the need for detailed evaluation to reach the diagnosis and guide treatment.
The state of pregnancy results in a multitude of cutaneous changes in the female. These are a reflection of the profound alterations in the endocrine, metabolic and immunological profiles that occur during this period.1 Skin manifestations occur due to the production of a number of proteins and steroid hormones by the fetoplacental unit and also by the maternal pituitary, thyroid and adrenals.2 The placenta, a new endocrine organ in the woman, produces progesterone. Dehydroepiandrosterone is produced by the fetal adrenals from pregnenolone and this is aromatized to estriol. At term, the level of progesterone is 7 times, estradiol 130 times and prolactin 19 times that present at 8 weeks of gestation.3 There is an overall shift towards a Th2 cytokine profile, which helps protect the fetus from the maternal immune system.4 This is due to the high levels of progesterone, which promotes Th2 cytokines like IL-4, IL-5 and IL-10 and has inhibitory effects on TNF alpha production. Oestrogen suppresses IL-2 production. The postpartum period is marked by withdrawal of hormones and consequent elevation of Th1 cytokine levels.4
Cutaneous changes develop in more than 90% of all pregnant females.5 These range from common cutaneous changes that occur in most cases to severe diseases, some of which are seen exclusively in the pregnant and postpartum state. Cutaneous manifestations can be grouped into three broad categories: physiological cutaneous changes related to pregnancy; diseases modified by pregnancy; and specific dermatoses of pregnancy.6
PHYSIOLOGICAL CHANGES IN PREGNANCY
These changes are so common that they are not considered abnormal. Rather, they provide contributory evidence of a pregnant state. This, however, does not mean they are cosmetically acceptable to all patients. The various physiological changes during pregnancy are summarized in Table 1.
Table 1: Physiological changes in pregnancy
Category | Changes
Pigmentation | Generalized hyperpigmentation; pigmentation of inner thigh, genitalia, axilla; secondary areola; linea nigra; chloasma; prominence/appearance of pigmentary demarcation lines; enlargement and darkening of freckles, naevi and scars
Vascular changes | Oedema of distal extremities and hands; spider angiomas; palmar erythema; leg varicosities; rectal haemorrhoids; cutis marmorata; capillary haemangioma
Glandular changes | Miliaria; dyshidrotic eczema; Montgomery’s tubercles; aggravation of acne
Oral mucosal changes | Oedema and hyperaemia of gingivae; pregnancy epulis
Hair changes | Hirsutism; hypertrichosis; delayed anagen release after delivery
Nail changes | Brittle nail plate; onycholysis; Beau’s lines after delivery
Pigmentation:
Hyperpigmentation is one of the most common and early signs of pregnancy, seen in more than 90% of patients.7 High levels of Melanocyte Stimulating Hormone (MSH), oestrogen and progesterone are believed to be responsible for hyperpigmentation. Progesterone augments the oestrogen mediated melanin output, the levels of which correlate with pigmentary changes.8
Generalized hyperpigmentation is seen and is more marked in women with dark hair and skin.6 Pigmented areas of the body, namely the genitalia, perineum, areolae and upper medial thighs, demonstrate more pronounced pigmentation. Linea nigra, a hyperpigmented line extending from the pubic symphysis to the umbilicus and further up to the xiphisternum, replaces the linea alba.9 Chloasma, also termed the mask of pregnancy, is a well-marginated brownish pigmentation of the face resembling melasma. It is seen in 45-75% of pregnant women in western literature but in less than 10% of cases in women with pigmented skin.5,10,11 Pigmentary demarcation lines appear on the limbs with borders of abrupt transition; freckles, naevi and scars tend to darken and enlarge.12
The pigmentation gradually fades after delivery, though the resolution of skin colour is usually incomplete. Chloasma tends to persist in 30% of cases postpartum.13 Sun protection and reassurance are all that is needed. Topical formulations containing hydroquinone and tretinoin are avoided in pregnancy and can be added after delivery.
Physiological connective tissue changes:
Gross distension of the abdomen, together with adrenocortical activity, is responsible for the red-blue depressed streaks, called striae distensae, seen on the abdomen and breasts in 70-90% of pregnancies.5,14 These usually develop in the second trimester. Females with pre-existing striae on the breasts and thighs are more likely to develop striae gravidarum15, which are seen more often in White women than in Asian and African-American women.14 Preventive therapies are controversial and postpartum treatment options include topical tretinoin, excimer laser or surgery.10
Soft tissue fibromas of pregnancy are called molluscum fibrosum gravidarum. They appear in the second trimester on the neck, face and beneath the breasts. These disappear after delivery.16
Physiological vascular changes:
Vascular growth factors released during pregnancy by the pituitary, adrenals and placenta are believed to be causative and this has been demonstrated in vitro as well.17 Non-pitting oedema of the face, hands and feet is present in around half of all females in the later part of pregnancy.13 This is probably due to sodium and fluid retention and pressure of the gravid uterus on the inferior vena cava. Spider naevi or spider angiomas are small raised lesions with a central pulsatile punctum and radiating telangiectatic vessels frequently present over the area drained by the superior vena cava. They are present in 67% of White women and 11% Black women during the second trimester.5 Palmar erythema is seen in two-thirds of White and one-third of Black women.8 Other vascular changes include varicosities of legs and anus (40%)13, cutis marmorata (0.7%)18 and capillary haemangioma (5%)9. These changes revert after the postpartum period.
Physiological glandular changes:
Eccrine gland activity is usually increased but the palms show decreased sweating. Thus, the incidence of miliaria and dyshidrotic eczema is increased. There is inconclusive evidence to suggest that apocrine gland activity is decreased during pregnancy.19 Sebaceous activity increases in the third trimester leading to acne and enlargement of Montgomery’s tubercles.14 One-third to half of all pregnant women develop these tubercles, which are modified sebaceous glands.5,8 However, sebum excretion has not been found to decrease in lactating females post-delivery.20
Oral mucosal changes:
Oedema and hyperaemia of the gingivae in pregnancy is attributable to local irritation and nutritional deficiencies and is seen in around 80% of women.5 Gingivitis not related to poor oral hygiene may occur. Granuloma gravidarum, or pregnancy epulis, may occur and regresses postpartum.
Hair changes:
Hair changes are seen in 3-12% of pregnant females.21 Hirsutism and hypertrichosis occur due to oestrogen, which increases the percentage of hairs in anagen.2 Approximately 2-3 months after delivery, loss of telogen hair occurs.22 This is termed delayed anagen release, as the hair follicles are no longer stimulated to stay in the anagen phase by the maternal hormones. Hair recovery occurs in 3-12 months. A small number of females may experience episodic shedding of hair for long periods. This has been proposed to be due to the inability of some hair follicles to revert to asynchronous shedding.23 Rarely, male-pattern baldness may occur in women.2
Nail changes:
Nail growth increases during pregnancy.6 Brittleness of the nail plate and distal onycholysis may be seen.19 Beau’s lines may develop after delivery.12 Reassurance is all that is needed for these benign nail problems.
DISEASES MODIFIED BY PREGNANCY
Many pre-existing dermatoses may be exacerbated or ameliorated by pregnancy. Certain tumours may also show remission or exacerbation. This is due to the shift in pregnancy to the Th2 state and a return to Th1 state in the postpartum period and also the discontinuation of some drugs due to their teratogenic potential.
Infections:
Depressed cell-mediated immunity makes the pregnant woman susceptible to more severe and frequent infections.24
Candidiasis is quite common and was found to be the commonest cause of white vaginal discharge, being present in 22% of pregnant females.5 Half of all neonates born to infected mothers are positive for Candida and some may show signs of infection.25 Pityrosporum folliculitis, caused by Pityrosporum ovale, is more common in pregnancy.25
Genital warts are the commonest sexually transmitted disease, seen in 4.7% of subjects, and they increase in size during pregnancy.9,25 Prophylactic caesarean section to prevent laryngeal papillomas in the neonate is no longer recommended.26 Herpes simplex virus infection carries a 50% risk of transmission to the neonate in a primary episode and a 5% risk in a recurrent episode; caesarean section might be warranted to prevent such transmission.26 Varicella zoster virus infection has been reported to cause pneumonia in 14% of mothers and death in 3%.27 Bowenoid papulosis, caused by human papilloma virus, may appear for the first time during pregnancy or may be aggravated by it.6
Pregnancy hastens the clinical manifestations in HIV-infected females, possibly due to additive immune suppression. Pneumocystis pneumonia or listeriosis may prove fatal.27 Kaposi’s sarcoma may occur in these females.27 Leprosy presents for the first time in pregnancy or the postpartum period in 20-30% of affected women.28 The disease tends to downgrade towards the lepromatous pole in pregnancy and upgrade during lactation.29 Type 1 lepra reactions are more frequent in the first trimester and after delivery, whereas type 2 lepra reactions peak in the third trimester.29 Trichomoniasis is diagnosed in 60% of pregnant women.25
Autoimmune diseases:
Systemic Lupus Erythematosus (SLE) is associated with a better prognosis than previously thought, provided the disease is in remission and nephropathy and cardiomyopathy are not present.10 If the disease is active, it will worsen in half of patients and there may be fatalities.14 SLE tends to be more severe if it first presents in pregnancy.14 Babies of such mothers are likely to develop neonatal lupus.
Patients with scleroderma are usually unaffected, and some improve in pregnancy. However, there are occasional reports of renal crisis, hypertension and pre-eclampsia.30 The course of dermatomyositis is usually unaltered, but the disease may worsen in some patients.31
Pemphigus tends to be exacerbated or present for the first time in pregnancy.32 The clinical presentation in pregnancy is similar to that of the regular presentation. Differentiation from herpes gestationis is important.
Metabolic diseases:
Effect of pregnancy on porphyria cutanea tarda is not clear, though some females show biochemical and clinical deterioration.33 Acrodermatitis enteropathica shows clinical worsening.34
Connective tissue diseases:
Pregnancy can lead to bleeding, uterine lacerations and wound dehiscence in patients with Ehlers-Danlos syndrome. Patients with pseudoxanthoma elasticum may suffer massive gastrointestinal bleeds.35 Lichen sclerosus et atrophicus of the vulva usually improves in pregnancy and a normal delivery is usually possible.
Disorders of glands:
Acne can be aggravated during pregnancy. Hidradenitis suppurativa and Fox-Fordyce disease improve as a result of decreased apocrine gland activity.27
Keratinization diseases:
The course of psoriasis remains unaltered in 40% of females during pregnancy, improves in a similar percentage and worsens in the remainder.36 It is more likely to deteriorate in the postpartum period.37 Psoriatic arthritis has been found to worsen or present for the first time in pregnancy.2
Generalized pustular psoriasis of von Zumbusch may rarely occur. Though most patients have a preceding or family history of psoriasis, some may develop the disease without ever having had a preceding episode.38 Peak incidence is seen in the last trimester and the disease tends to recur.38 Multiple, discrete, sterile pustules at the margins of erythematous macules are seen on the umbilicus, medial thighs, axillae, inframammary folds, gluteal creases and sides of the neck. These break to form erosions and crusts. Painful, circinate mucosal erosions may form. Prednisolone is used for management.12 Pustular psoriasis of pregnancy was earlier termed ‘impetigo herpetiformis’, but this term is best avoided as the condition is impossible to differentiate from generalized pustular psoriasis, both clinically and histologically.6 Erythrokeratoderma variabilis is reported to worsen during pregnancy.27
Tumours:
A melanoma that develops during pregnancy carries a worse prognosis, but if pregnancy occurs after the tumour is resected, the prognosis is unaltered.39 Metastasis to the fetus has been seen, and a minimum interval of two years between tumour resection and a subsequent pregnancy is recommended.32 A female with neurofibromatosis may develop neurofibromas for the first time in pregnancy, or older neurofibromas may grow in size. Rupture of major vessels may occur.6 Pregnancy may worsen mycosis fungoides and eosinophilic granuloma.6
Miscellaneous diseases:
Prognosis of atopic dermatitis is unpredictable in pregnancy, with reports of both improvement and worsening.27 Predisposed patients may first develop atopic dermatitis during pregnancy.40 Allergic contact dermatitis may improve in pregnancy.12 Hand eczema may worsen in the puerperal period.6 Erythema multiforme may be precipitated by pregnancy.6 Autoimmune progesterone dermatitis has been described in pregnancy.12 This disease is characterized by hypersensitivity to progesterone demonstrated by a positive intradermal skin test and cutaneous lesions resembling urticaria, eczema, erythema multiforme and dermatitis herpetiformis.41 The disease is associated with fetal mortality and recurs in subsequent pregnancies.12
PREGNANCY SPECIFIC DERMATOSES
These are a heterogeneous group of inflammatory skin diseases specific for pregnancy.42 Most of these conditions are benign and resolve spontaneously in the postpartum period but a few of these are associated with fetal complications.42 Almost all of them present with pruritus and a cutaneous eruption of varying severity.5
Classification:
The first attempt to classify these conditions was made by Holmes and Black in 1982-83 who classified them into: a) Pemphigoid Gestationis (PG) or Herpes Gestationis(HG), b) Polymorphic Eruption of Pregnancy (PEP) or Pruritic Urticarial Papules and Plaques of Pregnancy (PUPPP), c) Prurigo of Pregnancy (PP) and d) Pruritic Folliculitis of Pregnancy (PF).43,44 Shornick was of the view that all patients with PF also had papular dermatitis, so he included PF in the PP group. He included Intrahepatic Cholestasis of Pregnancy (ICP) in his classification for dermatoses where secondary skin lesions due to scratching are produced. He proposed that failure to consider ICP in the classification has led to confusion in terminology of pregnancy specific diseases. Thus, his classification included PG, PEP, PP and ICP.45 Ambros-Rudolph et al carried out a retrospective review of 505 pregnant patients over a 10 year period and gave a more rationalised classification system in 2006. They clubbed PP, PF and eczema of pregnancy in one group called Atopic Eruption of Pregnancy (AEP) due to their overlapping features and found this group to be the most common pruritic condition in pregnancy. Thus, they proposed four conditions: a) AEP, b) PEP, c) PG and d) ICP.46 The various specific pregnancy dermatoses have been elaborated in Table 2.
Table 2: Comparison of different pregnancy specific dermatoses in relation to clinical characteristics, prognosis, investigations and treatment.
Feature | AEP | PEP | PG | ICP
Pruritus | + | + | + | +
Primary cutaneous involvement | + | + | + | -
Skin lesions | Eczematous or papular | Papules, vesicles and urticarial lesions | Vesiculo-bullous lesions on urticarial base | Excoriations, papules secondary to scratching
Site of lesions | Trunk, extensors of limbs, rest of the body also involved | Abdominal involvement, in striae distensae, periumbilical sparing | Abdominal, particularly periumbilical involvement | Palms and soles followed by rest of the body
Time of onset | First trimester | Third trimester, postpartum | Second and third trimester, postpartum | Second and third trimester
Risk with primigravidae | - | + | - | -
Association with multiparity | - | + | - | +
Flare at delivery | - | - | + | -
Recurrence | + | - | + | +
Family history | + | + | - | +
Histopathology | Non-specific | Non-specific | Specific, subepidermal vesicle | Non-specific
Immunofluorescence | - | - | Linear deposition of C3 | -
Other lab findings | IgE elevated | - | Indirect IMF + | Increased serum bile acids
Maternal risk | - | - | Progression to pemphigoid, thyroid dysfunction | Gallstones, jaundice
Fetal risk | - | - | Prematurity, small for age baby, neonatal blistering | Premature births, fetal distress, stillbirth
Treatment | Steroids, antihistaminics | Steroids, antihistaminics | Oral steroids, antihistaminics | Ursodeoxycholic acid
Atopic eruption of Pregnancy (AEP): (Syn: Besnier’s prurigo, prurigo gestationis, Nurse’s early onset prurigo of pregnancy)
It is the most common pregnancy-specific dermatosis, accounting for nearly half of all patients, and comprises eczematous or papular lesions in females with a personal or family history of atopy and elevated IgE.46 The disease tends to recur in subsequent pregnancies, with 75% of all cases occurring before the start of the third trimester.47 It carries no risk for the mother or baby; however, the infant may develop atopy later in life.48 Treatment is symptomatic with antihistamines and corticosteroids.
E-type AEP: This group comprises 67% of AEP patients and includes patients with eczematous features, previously referred to as Eczema of Pregnancy (EP). It was not until 1999 that a high prevalence of atopic eczema was noted in pregnancy.49 In 80% of cases, atopic dermatitis occurs for the first time, or recurs after a long remission, during pregnancy.46 This is attributed to the Th2 cytokine profile in pregnancy and a dominant humoral immunity.4 It is more common in primigravidae and in single gestation, begins in early pregnancy, and affects the whole body including the face, palms and soles.46
P-type AEP: This group includes what was referred to previously as Prurigo of Pregnancy and Pruritic Folliculitis of Pregnancy. Prurigo of Pregnancy (PP) is seen in one out of 300 to 450 pregnancies and occurs predominantly in the second to third trimester.50 Excoriated or crusted papules are seen over the extensors of extremities and abdomen and are associated with some eczematization. The eruption lasts up to 3 months after delivery and recurrences in subsequent pregnancies are common.51 PP is associated with ICP with the differentiating feature being the absence of a primary lesion in the latter.50 Personal and family history of atopic dermatitis or raised IgE may be seen in PP.52 Serology is normal. There are no specific changes on histopathology and immunofluorescence results are found to be negative.50 There appears to be no maternal or fetal risk.45
Pruritic Folliculitis of Pregnancy (PF), first described by Zoberman and Farmer, is now believed to be as common as PG or PP, though only a few cases have been reported.50 It begins in the latter two trimesters and affects roughly one in 3000 pregnancies.51 Pruritus is not a defining feature, despite what the name suggests.2 Multiple, follicular papules and pustules occur on the shoulders, arms, chest, upper back and abdomen and are acneiform in nature.42 The lesions tend to resolve in a couple of months following delivery. Histopathological examination reveals non-specific features with sterile folliculitis and immunofluorescence studies are negative.50 No maternal or fetal risk is described except for low birth weight neonates in a single study.52 Pathogenesis of PF is unknown with no definite role of androgens or immunologic abnormalities.53 There is no evidence to suggest that it is a hormonally aggravated acne as proposed by some workers.54
Polymorphic Eruption of Pregnancy (PEP): (Syn: Pruritic Urticarial Papules and Plaques of Pregnancy or PUPPP, Bourne’s Toxaemic Rash of Pregnancy, Toxic Erythema of Pregnancy, Nurse’s Late Prurigo of Pregnancy)
With a prevalence of one case in every 130-300 pregnancies, this disease is the second most common pregnancy-specific dermatosis and was seen in 21.6% of the pregnancies reviewed by Ambros-Rudolph et al.46 They found it began in late pregnancy in 83% of cases and in the postpartum period in 15%.46 The disease occurs predominantly in primigravidae and a familial predisposition is present.55 Lesions are pleomorphic, usually urticarial, but purpuric, vesicular, polycyclic and targetoid lesions may be present. The striae on the abdomen are the first to be involved and there is a characteristic periumbilical sparing.56 The lesions seldom occur on the body above the breast or on the hands and feet.12 The lesions resolve with scaling and crusting in six weeks. The disease is more common with excessive weight gain during pregnancy and in multiple gestation.57,58 Histopathology is non-specific and shows spongiosis, occasional subepidermal split and eosinophilic infiltration. Serology and immunofluorescence are negative.50 Treatment is symptomatic; oral steroids are needed in severe cases. There are no associated maternal or fetal complications,59 although infants may later develop atopic dermatitis.2
The pathogenesis is unknown; however, it is hypothesized that abdominal distension damages collagen and elastic fibres in the striae, leading to the formation of antigens and triggering an inflammatory cascade.60 A role for progesterone has been suggested by the increased progesterone receptor immunoreactivity in skin lesions of PEP.61 The discovery of fetal DNA in skin lesions of women with PEP has furthered the hypothesis that abdominal distension leads to increased permeability of vessels and permits chimeric cell migration into the maternal skin.62 Linear IgM dermatosis of pregnancy is an entity characterized by pruritic, red, follicular papules and pustules on the abdomen and proximal extremities seen after 36 weeks gestation, and a linear band of IgM deposition at the basement membrane zone. It has been characterized as a variant of PEP or PP by different authors.12
Pemphigoid Gestationis (PG): (Syn: Herpes Gestationis or HG, Gestational Pemphigoid, Dermatitis Herpetiformis of Pregnancy)
PG is the most clearly characterized pregnancy dermatosis and the one which also affects the fetal skin.63 It is a rare, self-limiting, autoimmune bullous disease with an incidence of 1:1700 to 1:50000 pregnancies.63 Mean onset occurs at 21 weeks gestation, though it occurs in the postpartum period in a fifth of all cases.64 Constitutional symptoms, burning and itching herald the onset of the disease. Half of patients develop urticarial lesions on the abdomen, particularly in the periumbilical region, that change rapidly to a generalized bullous eruption usually sparing the face, palms, soles and mucosae. Vesicles may arise in a herpetiform or circinate distribution. The face is involved in 10% of cases and the oral mucosa in 20%.12 The disease shows spontaneous improvement in late gestation, but flares may occur at the time of delivery in 75% of cases.63 Though the disease may remit a few weeks after delivery, a protracted course, conversion to bullous pemphigoid, or recurrence with the menstrual cycle and use of oral contraceptive pills has been reported.50 PG tends to recur in subsequent pregnancies in a more severe form, at an earlier stage and with a longer postpartum course.50 Skipped pregnancies have been described.63,65 The disease is also linked with hydatidiform mole and choriocarcinoma.66
The classical histopathological finding is the presence of a subepidermal vesicle, spongiosis and an infiltrate consisting of lymphocytes, histiocytes and eosinophils.64 An inverted tear-drop appearance due to oedema in the dermal papilla is seen in early urticarial lesions.15 Direct immunofluorescence reveals a linear deposition of C3 along the dermo-epidermal junction in 100% of cases and is diagnostic of the disease, while salt-split skin shows epidermal staining.67 Antithyroid antibodies may be present but thyroid dysfunction is not common.63 Systemic corticosteroids are the mainstay of management. About one in ten children born to women with PG develops blisters due to passive transfer of antibodies; this resolves spontaneously. Severity of the disease has been correlated with the risk of prematurity and small-for-gestational-age babies.68
Pathogenesis of PG involves the production of IgG1 antibodies against the NC16A domain of the carboxyl terminus of Bullous Pemphigoid Antigen 2 (BPAg2), leading to activation of complement, recruitment of eosinophils to the local site, damage to the basement membrane and consequent blistering.2 The aberrant expression of MHC class II antigens of paternal haplotype is believed to stimulate an allogeneic response against the placental basement membrane, which is believed to cross-react with the skin in PG.63,69
Intrahepatic Cholestasis of pregnancy (ICP): (Syn: Obstetric Cholestasis, Pruritus Gravidarum, Icterus Gravidarum, Recurrent Jaundice of Pregnancy, Idiopathic Jaundice of Pregnancy)
Pruritus in pregnancy is fairly common and can be due to various reasons like pregnancy specific dermatoses and other co-existing dermatoses such as scabies, urticaria, atopic dermatitis, drug reactions etc. It was found to be present in more than half of 170 pregnant women in an Indian study.70 This must be differentiated from ICP where the skin lesions arise secondary to itching.
ICP was first described by Kehr in 1907.63 The terms Pruritus Gravidarum (pruritus without skin changes, occurring early in pregnancy, related to atopic diathesis and without cholestasis) and Prurigo Gravidarum (pruritus associated with PP-like skin lesions and associated with cholestasis), previously used for ICP, have led to much confusion regarding nomenclature.63 The disease has an incidence of 10-150 cases per 10,000 pregnancies71, being more common in South America and Scandinavia, probably due to dietary factors.50 Patients complain of sudden-onset pruritus beginning on the palms and soles and later generalizing to the whole body. Skin lesions are secondary to itching and range from excoriations to prurigo nodularis; the extensors are more severely involved. Jaundice is seen in only 20% of cases.72 Clay-coloured stools, dark urine and haemorrhage secondary to vitamin K malabsorption can occur. A family history can be elicited in half of cases and an association with multiple gestation is described.73 Resolution of ICP occurs soon after delivery. Recurrence in subsequent pregnancies is seen in 45-70% of cases, and routinely with the use of oral contraceptive pills, though no detectable abnormalities are seen in the interval between two pregnancies.63 Histopathology is non-specific and immunofluorescence is negative. Diagnosis is made by increased serum bile acid levels; transaminases are elevated and the prothrombin time may be prolonged. A 2.7-times increased risk of gallstones is reported in primigravidae with ICP compared with non-pregnant women.74 ICP is associated with significant fetal morbidity, including premature birth in 20-60% of cases, intrapartum fetal distress including meconium aspiration in 20-30%, and fetal mortality in 1-2%.71 The risk is particularly high if serum bile acid levels exceed 40 micromoles per litre.75 Meconium may cause umbilical vein compression, and induction of labour at 36 weeks gestation has been recommended in severe cases.50 The goal of treatment is reduction of serum bile acids. Ursodeoxycholic acid, given at a dose of 15 mg/kg orally daily, is the only proven therapeutic agent that decreases fetal mortality.63,76 Cholestyramine reduces vitamin K absorption and increases the risk of haemorrhage. Other agents like S-adenosylmethionine, dexamethasone, silymarin, phenobarbitone, epomediol and activated charcoal are less effective and do not affect fetal risk.63 Topical emollients and antipruritic agents offer symptomatic relief, but antihistamines are not very effective.50
The key event in the pathogenesis of ICP is elevation of bile acids. Oestrogens are said to have cholestatic properties, reducing hepatocyte bile acid uptake and inhibiting basolateral transport proteins.50 Progesterone may additionally saturate the capacity of these transport proteins in the hepatocyte.71 Genetic predisposition occurs due to mutations in genes encoding bile transport proteins, with cholestasis developing in pregnancy as their secretory capacity is exceeded.63 Bile acids passing through the placenta produce vasoconstriction of placental veins, fetal cardiomyocyte dysfunction and abnormal uterine contractility, all leading to fetal hypoxia.71
CONCLUSION
Pregnancy is associated with a wide variety of cutaneous changes. These range from common, benign changes termed physiological to more severe diseases posing significant risk to the mother as well as the baby. Physiological pregnancy changes may be of cosmetic concern to the patient and seldom need anything more than counselling. Pre-existing dermatoses may be aggravated during this period, posing a challenge to the treating physician. Women suffering from such diseases need to be warned of complications and risks before trying to conceive. A strict watch for possible complications and appropriate management at an early stage is warranted. Women should also be examined for pregnancy-specific dermatoses, and their complaints should not be dismissed as non-specific or physiological. Careful history and examination, with judicious use of investigations, will help to arrive at a diagnosis and institute treatment promptly.
Meningiomas are common intracranial neoplasms with a wide range of histopathological appearances. The WHO classification of tumours of the central nervous system recognises 15 subtypes of meningioma, of which the meningothelial, fibrous and transitional subtypes are the most common. Lymphoplasmacyte-rich meningioma (LPM) is a rare subtype belonging to WHO grade I.1 The estimated incidence is less than 1% of all meningiomas.2 LPM usually occurs in young and middle-aged patients, the most common locations being the cerebral convexities, skull base, parasagittal area within the superior sagittal sinus, cervical canal, optic nerve and tentorium.3 Histopathological examination shows extensive infiltrates of lymphocytes and plasma cells, often obscuring the meningothelial component.
Case report
A 21-year-old man presented with a 4-month history of headache. It was a dull pain not associated with vomiting, seizures or visual symptoms. The patient did not have any features suggestive of cranial nerve involvement. Physical examination was unremarkable except for the presence of papilloedema. Non-contrast CT scan showed a large isodense lesion with perilesional oedema and an eccentric enhancing nodular component in the right fronto-parietal region (Figure 1). A radiological diagnosis of glioma with mass effect and shift to the left was made. A right frontoparietal free bone flap craniotomy was performed. Operatively, a well-encapsulated tumour, probably arising from the dura mater, was found. Gross total removal of the tumour was performed and the excised tumour was sent for histopathological examination with a provisional clinical diagnosis of meningioma.
Histopathological examination revealed a tumour arranged as sheets and whorls of meningothelial cells without any mitoses or atypia. A dense infiltrate of lymphocytes and plasma cells was seen in large areas of the tumour (Figure 2).
On immunohistochemistry, tumour cells were positive for epithelial membrane antigen (EMA) (Figure 3) and vimentin. The lymphoplasmacytic infiltrate contained a mixture of CD3- and CD20-positive lymphocytes. A diagnosis of lymphoplasmacyte-rich meningioma was made.
Figure 1. Non-contrast CT scan showing a large isodense cystic lesion with perilesional oedema and eccentric enhancing nodular component in the right frontoparietal region
Figure 2: Tumour arranged as sheets and whorls of meningothelial cells without any mitoses or atypia. A dense infiltrate of lymphocytes and plasma cells seen in large areas of the tumour (H & E x 100)
Meningiomas are common neoplasms accounting for 24-30% of all primary intracranial tumours. They arise from the arachnoidal cells, and are typically attached to the inner surface of the duramater.1 Most of the meningiomas are benign, corresponding to WHO grade I and associated with a favourable clinical outcome. LPM is a rare low grade histopathological subtype of meningioma, usually seen in younger patients, with the mean age of onset being 34 years.4,5 The patients with LPM have variable clinical manifestations according to the location of the tumour. The common presentations include headache, hemiparesis, seizure, vomiting, dizziness, visual disturbance, dyscalculia, dysgraphia and slurred speech.3 Although the natural history of LPM is often over one year, few cases might occur in short duration due to inflammatory cell infiltration and oedema.6 Systemic haematological abnormalities such as hyperglobulinemia and iron refractory anaemia have been documented in some patients with LPM, believed by some to be due to the plasma cell infiltrate.3,6,7
Radiologically, LPMs are usually globular, highly vascular, contrast-enhancing, dural-based tumours. The typical MRI characteristics of LPM are isointense lesions on T1-weighted images and hyperintense lesions on T2-weighted images, with strong homogeneous enhancement after the administration of gadolinium, obvious peritumoural brain oedema and a dural tail sign.3 Sometimes a cystic component and heterogeneous enhancement may also be encountered, making pre-operative diagnosis difficult, as in our case.8
On microscopic examination, this tumour is characterised by a conspicuous infiltrate of lymphocytes and plasma cells, sometimes completely obscuring the tumour cells. The massive infiltration of lymphocytes and plasma cells has been postulated to play a central role in the development of the brain oedema associated with LPM. The origin of this tumour (neoplastic or inflammatory) is unclear, and some consider it closer to intracranial inflammatory masses than to typical meningiomas.7
The differential diagnoses include collision tumour of meningioma and plasmacytoma, inflammatory pseudotumour, idiopathic hypertrophic pachymeningitis (IHP) and intracranial plasma cell granuloma.3,7 Staining for EMA and vimentin is useful in indicating the meningothelial origin of the tumour and differentiates LPM from other intracranial lesions.9
The pathological findings of IHP usually include thickened, fibrotic dura mater with marked infiltration of lymphocytes and plasma cells, occasionally accompanied by small islands of meningothelial proliferation mimicking those of LPM. A localised nodular lesion can sometimes rule out this diagnosis, since IHP usually shows diffuse lamellar thickening or plaque-like features.4
Chordoid meningiomas often contain regions that are histologically similar to chordoma, with cords or trabeculae of eosinophilic, vacuolated cells in a background of abundant mucoid matrix.3 Detailed histological study can aid the differential diagnosis. The plasma cell component of LPM is not neoplastic, and thus plasmacytoma with reactive meningothelial hyperplasia and a collision tumour involving meningioma and plasmacytoma can both be excluded.10
Knowledge of this rare entity is important to avoid underdiagnosis as an inflammatory pseudotumour or plasma cell granuloma and overdiagnosis as a plasmacytoma.
The non-vitamin K antagonist oral anticoagulants have demonstrated favourable benefit–risk profiles in large phase III trials, and these findings have been supported by real-world studies involving unselected patients representative of those encountered in routine clinical practice, including those deemed ‘challenging to treat’.
Accurate detection of atrial fibrillation and assessment of stroke and bleeding risk are crucial in identifying patients who should receive anticoagulation.
Elderly populations represent a significant proportion of patients seen in general practice, and advanced age should not be regarded as a contraindication to treatment; acetylsalicylic acid is not considered an effective treatment option to reduce the risk of stroke in patients with non-valvular atrial fibrillation (except for those declining oral anticoagulation), particularly in frail elderly patients, for whom this drug was historically prescribed.
The frequency of follow-up visits, in particular to check compliance, should be tailored according to patients’ clinical characteristics and needs, but there is no requirement for routine coagulation monitoring, unlike with vitamin K antagonists.
Atrial fibrillation: a clinical and economic burden to society
Atrial fibrillation (AF) is the most frequently encountered sustained cardiac arrhythmia, with a prevalence of about 1.5–2% in the general population1,2. Its incidence is predicted to rise sharply over the coming years as a consequence of the ageing population and increased life expectancy in those with ischaemic and other structural heart disease2. In addition to being associated with significantly increased rates of mortality3, AF is also associated with significantly increased rates of heart failure, which is both a common cause and consequence of AF and greatly worsens the prognosis4. However, it is stroke that is the most devastating consequence of AF, with an average fivefold increased risk5.
AF-related strokes are often more severe than other strokes6,7 because the clots that embolise from the left atrium or left atrial appendage are often much larger8 than those from other sources of emboli. These clots usually lodge in large cerebral vessels, commonly the middle cerebral artery, resulting in major neurological and functional deficits and increased mortality compared with other stroke types. Moreover, the strokes suffered by patients with AF are more likely to lead to extended hospital care than strokes in patients without AF, thus impacting on patients’ quality of life7.
Current evidence suggests that, in the UK, AF has a causative role in almost 20% of all strokes9. This is likely to represent a significant underestimate, given that long-term electrocardiogram (ECG) monitoring in patients who would previously have been diagnosed as having cryptogenic stroke has demonstrated a significant AF burden in these patients10.
With improved AF detection and stroke prevention, it is estimated that approximately 8000 strokes could be avoided and 2100 lives saved every year in the UK, resulting in substantial healthcare savings of £96 million11,12.
A key objective of this short review is to provide primary care clinicians with the confidence to manage patients with AF in need of anticoagulation, including the safe and appropriate use of the non-vitamin K antagonist oral anticoagulants (NOACs) apixaban, dabigatran, rivaroxaban (approved in the EU, US and several other countries worldwide) and edoxaban (approved in the EU, US and Japan).13-20 The focus will be on how to accurately identify, risk-stratify and counsel patients on the risks and benefits associated with the different treatment options.
Who to treat: accurate detection and assessment of stroke and bleeding risk
Many patients with AF are asymptomatic, particularly the elderly, less active patients who may not notice the reduction in cardiac performance associated with AF. Unfortunately, it remains the case that AF is undetected in up to 45% of patients21, and stroke is very often the first presentation of AF.
Both the National Institute for Health and Care Excellence (NICE) and the European Society of Cardiology (ESC) guidelines recommend opportunistic screening in patients aged ≥65 years by manual pulse palpation followed by ECG in patients found to have an irregular pulse1,22. Opportunistic screening (manual pulse palpation) was shown to be as effective as systematic screening (ECG) in detecting new cases23, and this simple strategy should be used to screen at-risk patient groups as often as possible. Hypertension and increasing age are the two leading risk factors for developing AF, but other high-risk groups include patients with obstructive sleep apnoea, morbid obesity or a history of ischaemic heart disease24-26. In the context of proactive AF detection, many initiatives have been launched worldwide to encourage primary care clinicians to integrate manual pulse checks into their routine practice. The Know Your Pulse campaign was launched by the AF Association and Arrhythmia Alliance during Heart Rhythm Week in 2009 and was quickly endorsed by the Department of Health in the UK and by many other countries. This initiative has assisted in diminishing some of the gaps in AF detection21.
The most frequently used tools to evaluate stroke risk in patients with non-valvular AF (AF that is not associated with rheumatic valvular disease or prosthetic heart valves) are the CHADS2 and CHA2DS2-VASc scores27,28, with recent guidelines favouring the use of the latter and emphasising the need to effectively identify ‘truly low-risk’ patients1. The CHA2DS2-VASc score is superior to CHADS2 in identifying these truly low-risk patients, who should not be routinely offered anticoagulation1. Patients with any form of AF (i.e. paroxysmal, persistent or permanent), and regardless of whether they are symptomatic, should be risk-stratified in this way. The risk of stroke should also be assessed using CHA2DS2-VASc in patients with atrial flutter and probably in the majority of patients who have been successfully cardioverted in the past22. Unless the initial underlying cause has been removed (e.g. corrected hyperthyroidism) and there is no significant underlying structural heart disease, the risk of patients suffering a recurrence of AF following ‘successful’ cardioversion remains high29. The ESC guidelines recommend that anticoagulation should be offered to patients with a CHA2DS2-VASc score ≥1, based on assessment of the risk of bleeding complications and the patient’s clinical features and preferences1.
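For illustration only, the arithmetic of the CHA2DS2-VASc score (its components are spelt out in the key to Table 1) can be expressed as a short function. This is a sketch of the scoring arithmetic, not clinical guidance, and the function and parameter names are our own assumptions:

```python
def cha2ds2_vasc(chf_or_lv_dysfunction, hypertension, age, diabetes,
                 prior_stroke_tia, vascular_disease, female):
    """Illustrative CHA2DS2-VASc sum; maximum score is 9."""
    score = 0
    score += 1 if chf_or_lv_dysfunction else 0   # C: congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0            # H: hypertension
    if age >= 75:                                # A2: age >= 75 years (doubled)
        score += 2
    elif age >= 65:                              # A: age 65-74 years
        score += 1
    score += 1 if diabetes else 0                # D: diabetes
    score += 2 if prior_stroke_tia else 0        # S2: prior stroke/TIA (doubled)
    score += 1 if vascular_disease else 0        # V: vascular disease
    score += 1 if female else 0                  # Sc: sex category (female)
    return score

# Example: a 78-year-old woman with hypertension scores 4
print(cha2ds2_vasc(False, True, 78, False, False, False, True))  # -> 4
```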
The new Quality and Outcomes Framework (QOF) for 2015–2016 now recommends the use of CHA2DS2-VASc for risk stratification and no longer recommends antiplatelet agents as a therapeutic option for stroke prevention in patients with non-valvular AF30; this should result in significantly more patients receiving anticoagulation for this indication. The changes to QOF 2015–2016 compared with 2014–2015 are summarised in Table 130.
Table 1. Summary of changes to the UK Quality and Outcomes Framework (QOF) 2015–201630
NICE indicator ID | Changes | 2014–2015 points | 2015–2016 points
NM45: Patients with AF and CHADS2=1 currently treated with anticoagulant therapy or antiplatelet therapy | Retired | 6 | –
NM46: Patients with AF and a latest record of a CHADS2 ≥1 currently treated with anticoagulant therapy | Replaced by NM82 | 6 | –
NM82: Patients with AF and CHA2DS2-VASc ≥2 currently treated with anticoagulant therapy | Replacement | – | 12
NM81: Patients with AF in whom stroke risk has been assessed using the CHA2DS2-VASc risk-stratification scoring system in the preceding 12 months (excluding those with a previous CHADS2 or CHA2DS2-VASc ≥2) | New indicator | – | 12
Key: AF = atrial fibrillation; CHADS2 = Congestive heart failure, Hypertension, Age ≥75 years, Diabetes, Stroke (doubled); CHA2DS2-VASc = Congestive heart failure or left ventricular dysfunction, Hypertension, Age ≥75 years (doubled), Diabetes, Stroke (doubled), Vascular disease, Age 65–74 years, Sex category (female); NICE = National Institute for Health and Care Excellence
The Guidance on Risk Assessment and Stroke Prevention in Atrial Fibrillation (GRASP-AF) clinical audit software detection tool is now very widely used in primary care to improve clinical outcomes in the AF population by identifying patients likely to benefit from anticoagulation. GRASP-AF systematically scans general practice software systems and calculates CHADS2 and CHA2DS2-VASc scores in patients who are coded as having AF, thus enabling physicians to identify high-risk patients who are not adequately treated for stroke prevention31. Identification of AF patients who are poorly controlled on warfarin (defined as having a time in therapeutic range <65% or a labile international normalised ratio [INR], e.g. one INR value >8 or two INR values <1.5 or >5 within the past 6 months)22 is crucial because these patients are more likely to experience major bleeding or stroke. These patients should be reviewed and, if possible, the cause for the poor warfarin control should be identified. The Warfarin Patient Safety Audit tool is another software tool that has been developed to help identify patients with poor warfarin control32.
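As a purely illustrative sketch of the poor-warfarin-control criteria quoted above (time in therapeutic range <65%, or one INR value >8, or two INR values <1.5 or >5 within the past 6 months), the check could be expressed as follows. The function and variable names are assumptions for this example and do not reflect how GRASP-AF or the Warfarin Patient Safety Audit tool are actually implemented:

```python
def poorly_controlled_on_warfarin(ttr_percent, inr_values_last_6_months):
    """Flag poor warfarin control per the criteria quoted in the text."""
    if ttr_percent < 65:                                   # time in therapeutic range < 65%
        return True
    if any(inr > 8 for inr in inr_values_last_6_months):   # one INR value > 8
        return True
    out_of_range = [inr for inr in inr_values_last_6_months if inr < 1.5 or inr > 5]
    return len(out_of_range) >= 2                          # two INR values < 1.5 or > 5

print(poorly_controlled_on_warfarin(72, [2.4, 5.6, 1.3]))  # -> True (two out-of-range INRs)
```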
Primary care clinicians are being urged to objectively assess the bleeding risk of AF patients who are receiving, or about to receive, anticoagulation1,22,32. HAS-BLED is the bleeding assessment scheme advocated by both NICE and the ESC1,22; it has been validated in several independent cohorts and shown to correlate well with the risk of major bleeding, in particular intracranial bleeding1. A key aspect of HAS-BLED is that, unlike CHADS2 and CHA2DS2-VASc, it includes risk factors that are modifiable. It should, therefore, not be used to influence the decision of whether to anticoagulate, but instead to identify ways to reduce the risk of bleeding in patients receiving an anticoagulant; for example, optimising blood pressure control, stopping unnecessary antiplatelet or anti-inflammatory agents and reducing alcohol consumption can all significantly reduce HAS-BLED scores and bleeding risk1. In addition, it needs to be emphasised that the absolute number of patients with AF experiencing a serious bleeding event while receiving anticoagulant therapy is low (~2–3%/year in the XANTUS, PMSS and Dresden NOAC Registry real-world studies), with prospective real-world studies indicating that most bleeding events can be managed conservatively33-35. Whilst concerns have been raised about the lack of a reversal agent to counter the anticoagulant action of NOACs in patients who experience serious bleeding, the low incidence of major bleeding in real-world and phase III studies and its conservative management in most cases suggest that such agents would not be required routinely. Despite these low rates of major bleeding, reversal agents have been developed, have successfully completed phase III studies and have been approved in some markets, including idarucizumab in the UK36,37. Notably, high-risk patients with AF were shown to be more willing to endure bleeding events in order to avoid a stroke and its consequences38, thus reinforcing the message that “we can replace blood but we cannot replace brain tissue”.
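To illustrate the point about modifiable risk factors, the sketch below sums the commonly cited HAS-BLED components (one point each, maximum 9) and shows the score falling when modifiable factors are addressed. The exact component definitions should be taken from the original publication and the guidelines; the factor and variable names used here are assumptions for the example:

```python
# Commonly cited HAS-BLED factors, one point each (maximum 9); definitions simplified.
HAS_BLED_FACTORS = [
    "uncontrolled_hypertension", "abnormal_renal_function", "abnormal_liver_function",
    "prior_stroke", "bleeding_history_or_predisposition", "labile_inr",
    "elderly_over_65", "antiplatelet_or_nsaid_use", "alcohol_excess",
]

def has_bled(patient):
    """Sum one point for each factor present in the patient dictionary."""
    return sum(1 for factor in HAS_BLED_FACTORS if patient.get(factor, False))

# Modifiable factors: stopping an unnecessary NSAID and controlling blood
# pressure reduces this hypothetical patient's score from 4 to 2.
before = {"uncontrolled_hypertension": True, "elderly_over_65": True,
          "antiplatelet_or_nsaid_use": True, "labile_inr": True}
after = {"elderly_over_65": True, "labile_inr": True}
print(has_bled(before), has_bled(after))  # -> 4 2
```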
Adequate anticoagulation therapy should follow appropriate patient identification
Identifying the right treatment option for patients with AF is likely to improve clinical outcomes. Involving patients in the decision-making process and its rationale, and ensuring they understand the net benefit–risk of treatment options, is likely to lead to better compliance and improved clinical outcomes. The ESC guidelines consider patients with valvular AF (patients with AF in the presence of either rheumatic mitral stenosis [now very rare in the UK] or prosthetic heart valves) to be at high risk, and these patients should be anticoagulated with a VKA regardless of the presence of any other risk factors1. Warfarin is very effective at reducing the risk of stroke compared with acetylsalicylic acid (ASA)39,40, but an unpredictable dose–response relationship and multiple drug and food interactions can be problematic for some patients, and many patients remain sub-optimally treated41. ASA is also not considered an effective treatment option to reduce the risk of stroke in patients with non-valvular AF, especially in frail, elderly patients in whom ASA was historically prescribed. The GARFIELD-AF registry (10,614 patients enrolled in the first cohort) revealed that real-world anticoagulant prescribing in AF populations deviates substantially from guideline recommendations: 40.7% of patients with a CHA2DS2-VASc score ≥2 did not receive anticoagulant therapy, whereas 38.7% of those with a score of 0 did receive anticoagulant therapy. At diagnosis, 55.8% of patients overall were given a VKA, just over one quarter (25.3%) received an antiplatelet drug alone, and ~4.5% received a NOAC24. Inappropriate prescribing was further confirmed by data from UK general practices (n=1857, representing a practice population of 13.1 million registered patients) using the GRASP-AF tool: only 55% of patients with high-risk AF (CHADS2 ≥2) were receiving oral anticoagulation (OAC) therapy, and a further 34% of patients with no known contraindication did not receive OAC therapy42.
The NOACs have altered the landscape of stroke prevention management by increasing the available options for patients. These agents exhibit some important practical advantages over traditional therapy (e.g. no requirement for routine anticoagulation monitoring, simple fixed-dose oral regimens, fast onset of action, fewer drug interactions and no food interactions), leading to their increased uptake in primary care.
Key patient groups who are likely to benefit from the NOACs include patients poorly controlled on VKAs, those predicted to require medications that interact with VKAs (e.g. patients who require frequent antibiotics), those without severe renal impairment and those with a prior ischaemic stroke while receiving a VKA with an adequate INR. These agents could also be a good choice for patients living a considerable distance from their local hospital or surgery, and for commuters. The NICE guidelines state that primary care clinicians should consider clinical features and patient preference before deciding on the most appropriate option for patients22. In addition, cost may be important in some settings. All of the NOACs have demonstrated cost-effectiveness versus warfarin, and although cost models vary by country, there is little doubt that these agents are cost-effective, largely through the number of adverse events avoided and their associated costs43.
Choice of anticoagulant: which agent to choose?
The demonstration of a favourable benefit–risk profile (stroke prevention vs bleeding events) in large phase III studies involving over 70,000 patients has resulted in the regulatory approval of apixaban, dabigatran, edoxaban and rivaroxaban44-47 for the prevention of stroke and systemic embolism in patients with non-valvular AF and one or more risk factors.
Overall, NOACs have demonstrated an improved benefit compared with warfarin, with lower rates of intracranial haemorrhage (for all NOACs) and similar or superior efficacy for stroke prevention44-48. Statistically significant relative risk reductions (RRRs) in the incidence of fatal bleeding events were seen with low-dose dabigatran (110 mg twice daily [bd]; RRR=42%), both tested doses of edoxaban (30 mg once daily [od] and 60 mg od; RRR=65% and 45%, respectively) and rivaroxaban (20 mg od; RRR=50%)46,47,49; rates of fatal bleeding were also lower in patients treated with apixaban compared with warfarin (34 patients vs 55 patients, respectively)44. These data are promising, especially considering the current lack of a specific antidote for any of the NOACs, and it is likely that the short half-lives of these drugs play an important role in mitigating the bleeding risk.
Owing to a lack of head-to-head comparisons between the NOACs in phase III clinical trials, patient characteristics, drug compliance, tolerability issues and cost may be important considerations1. In addition, subanalyses of phase III trial data for rivaroxaban, apixaban and dabigatran indicate that the challenging-to-treat patient groups often encountered by primary care clinicians can be treated effectively and safely with the NOACs (Table 2). A recent meta-analysis showed a similar treatment effect for almost all subgroups encountered in clinical practice; NOACs appeared to be at least as effective as VKAs in reducing the risk of stroke and systemic embolism and no more hazardous in relation to the risk of major bleeding events, irrespective of patient co-morbidities50.
Table 2. Novel oral anticoagulants studied in key patient subgroups*
Subgroup analysis | Rivaroxaban | Dabigatran | Apixaban
Factors related to disease | ROCKET AF | RE-LY | ARISTOTLE
Heart failure | ✓59 | ✓60 | ✓61
Renal impairment | ✓62 | ✓63 | ✓64
Prior stroke | ✓65 | ✓66 | ✓67
VKA-naïve | ✓68 | ✓69 | ✓70
Prior MI or CAD | ✓ (prior MI)71 | ✓ (CAD or prior MI)72 | ✓ (CAD)73
PAD | ✓74 | – | –
PK/PD | ✓75 | ✓76 | –
East Asian patients | ✓77 | ✓78 | 79
Elderly | ✓80 | ✓49 | ✓81
Major bleeding predictors | ✓82 | – | –
Obesity | – | – | –
Diabetes | ✓83 | ✓84 | ✓85
Valvular heart disease | ✓86 | – | ✓87
Paroxysmal versus persistent AF | ✓88 | ✓89 | ✓90
*No subgroup analyses have been presented for edoxaban.
Key: AF = atrial fibrillation; ARISTOTLE = Apixaban for Reduction In STroke and Other ThromboemboLic Events in atrial fibrillation; CAD = coronary artery disease; CHADS2 = Congestive heart failure, Hypertension, Age ≥75 years, Diabetes, Stroke (doubled); MI = myocardial infarction; PAD = peripheral artery disease; PK/PD = pharmacokinetics/pharmacodynamics; RE-LY = Randomized Evaluation of Long-term anticoagulation therapy; ROCKET AF = Rivaroxaban Once daily, oral, direct factor Xa inhibition Compared with vitamin K antagonism for prevention of stroke and Embolism Trial in Atrial Fibrillation; VKA = vitamin K antagonist
Because patient selection in clinical trials is based on strict inclusion/exclusion criteria, patient populations in such studies are not always representative of patients routinely seen in real-world practice. In addition, bleeding events may be managed differently in clinical trials versus routine clinical practice. Real-world data are, therefore, needed to help validate drug safety and effectiveness in unselected patient populations. Following phase III clinical trials and the widespread approval of the NOACs in stroke prevention in patients with non-valvular AF, real-world experience has been steadily accumulating. The current real-world data for rivaroxaban, apixaban and dabigatran have been very reassuring and bridge the evidence gap between clinical studies and real-world experience33-35,51-57.
The lack of routine coagulation monitoring with NOACs does not remove the necessity for regular follow-up; instead, the frequency of visits can be tailored according to patients’ clinical characteristics and needs. NOACs are all partially eliminated by the kidneys; therefore, regular monitoring of renal function is important to determine whether a lower recommended dose should be used or the drug avoided altogether. For example, renal function should be monitored every 6 months in patients who have stage III chronic kidney disease (creatinine clearance [CrCl] 30–60 ml/min)58. Apixaban, rivaroxaban and edoxaban are not recommended in patients with CrCl <15 ml/min, and dabigatran is contraindicated in patients with CrCl <30 ml/min13,15,17,19. Reduced-dose regimens of NOACs are recommended for patients at higher risk of bleeding events, including those with reduced renal function. For example, a reduced apixaban dose of 2.5 mg bd is indicated in patients with at least two of the following characteristics: age ≥80 years, body weight ≤60 kg or serum creatinine ≥1.5 mg/dl (133 μmol/l); a reduced rivaroxaban dose of 15 mg od is indicated in patients with CrCl 15‒49 ml/min58; edoxaban is recommended at a reduced dose of 30 mg od in patients with CrCl 15‒50 ml/min and contraindicated in patients with CrCl >95 ml/min58; and a reduced dose of 110 mg bd dabigatran should be considered in patients with CrCl 30‒50 ml/min who are at high risk of bleeding58. Follow-up visits should also systematically document patient compliance, thromboembolic and bleeding events, side-effects, co-medications and blood test results58.
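The dose-selection criteria quoted above lend themselves to simple decision rules. The sketch below illustrates this for rivaroxaban and apixaban only, using the thresholds stated in this paragraph together with the standard full doses; it is an illustration of the rules as quoted, not prescribing guidance, and the function and variable names are assumptions:

```python
def rivaroxaban_dose(crcl_ml_min):
    """Rivaroxaban dose per the CrCl thresholds quoted in the text."""
    if crcl_ml_min < 15:
        return "not recommended"
    if crcl_ml_min <= 49:       # CrCl 15-49 ml/min -> reduced dose
        return "15 mg od"
    return "20 mg od"            # standard dose

def apixaban_dose(age_years, weight_kg, creatinine_mg_dl, crcl_ml_min):
    """Apixaban dose: 2.5 mg bd if at least two of the three listed criteria are met."""
    if crcl_ml_min < 15:
        return "not recommended"
    criteria_met = sum([age_years >= 80, weight_kg <= 60, creatinine_mg_dl >= 1.5])
    return "2.5 mg bd" if criteria_met >= 2 else "5 mg bd"  # 5 mg bd is the standard dose

print(rivaroxaban_dose(40))            # -> "15 mg od"
print(apixaban_dose(82, 58, 1.1, 55))  # -> "2.5 mg bd" (age and weight criteria met)
```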
Conclusions
The NOACs have demonstrated favourable benefit–risk profiles in large phase III trials, and these findings have been supported by real-world studies involving unselected patients, including those deemed challenging to treat. The NOACs also address many of the limitations associated with VKA use, thus assisting with their integration into clinical practice for stroke prevention in patients with non-valvular AF. In addition, the results from subgroup analyses should provide primary care clinicians with the confidence to manage stroke-prevention strategies in a wide variety of patients with AF.
A 38-year-old female (BMI 20.2, ASA 2) underwent an elective robotic-assisted laparoscopic extirpation of endometriosis and dissection of endometriomas. Her medical history included hypertension, migraine, atopic dermatitis, sciatica, cervical spine spondylosis and dysplastic spondylolisthesis of L4/5. Of note, the patient had allergies to Aspirin (causing angioedema), Morphine and Tramadol (both causing generalized rash).
An 18-gauge IV cannula was inserted into the cephalic vein at the left wrist and connected to a bag of Hartmann’s solution. The patient was induced with Propofol 100mg, Rocuronium 30mg and a Remifentanil infusion running at Ce 1ng/mL. Cefazolin 2g and Dexamethasone 4mg were also administered post-intubation. No rashes were noted on the patient’s skin, and her arms were subsequently enclosed with green towels by her sides for the duration of the surgery. During the procedure, the patient was kept in a steep Trendelenburg position, with her face and eyes checked periodically. No rashes were noted on any exposed skin. Peri-operatively, she was maintained with O2/air/Desflurane, top-up doses of Rocuronium, and titration of the Remifentanil infusion. At the end of the surgery, the patient was administered Ondansetron 4mg and Pethidine 50mg (in 2mL), and reversed with Neostigmine 2.5mg and Glycopyrrolate 0.4mg. The patient’s arms were then exposed in preparation for transfer, and it was noted that she had developed severe erythema and inflammation in specific tributaries of the cannulated vein (Figure 1). The patient was extubated uneventfully five minutes later, and did not complain of any systemic symptoms or symptoms pertaining to the cord inflammation. She was monitored in recovery for three hours post-operatively; the inflammation subsided significantly by 90 minutes post-op (Figure 2) and completely by 150 minutes post-op (Figure 3).
There have been few reports of such a reaction in the published literature, and we take this opportunity to provide further pictorial evidence of the possible sequelae of IV administration of a high-concentration Pethidine solution. The differences in analgesic effectiveness and potential side effects between Morphine and Pethidine are negligible2. As such, and given that Pethidine is commonly used as a mode of analgesia on our wards and in the peri- and immediate post-operative periods when other classes of drugs are contraindicated, we hope to provide further pictorial support of such an extraordinary reaction for other interested clinicians. It is also interesting to note that in both cases the patient was female, around 40 years old, of thin body habitus and with an atopic tendency, and the concentration of the injected solution was higher than 10mg/mL; these are factors known to increase reaction severity3,4. We acknowledge that three other drugs were administered at approximately the same time as the Pethidine, and as such any of the four medications could have been the culprit, although this is unlikely as our patient had been given those medications in previous procedures with no issues or complications.
Figure 1: Post-op, Figure 2: 90 mins post-op, Figure 3: 150 mins post-op
India ranks second in the world not only in terms of its population but also in disaster proneness.1 Disasters, whether natural or man-made, result in a wide range of mental and physical health consequences.2 The international public agenda has taken notice of the protection and care of children in natural and man-made disasters, in large part because those affected and overlooked often include children and adolescents.3 There is continuing controversy about the impact of disasters on victims, including children,4,5 and some investigations deny that serious psychological effects occur.6,7,8 However, further research has found that the criteria used in these studies were extremely narrow and inadequate, and hence more systematic, clinically relevant investigations are required.9 For children and adolescents, the response to disaster and terrorism involves a complex interplay of pre-existing psychological vulnerabilities, stressors and the nature of support in the aftermath. Previous research has shown that direct exposure to different types of mass traumatic events is associated with an increase in post-traumatic stress symptoms,10,11,12,13 anxiety and depression,11,14 which are frequently comorbid with post-traumatic stress reactions among youth.15 To the best of our knowledge, only a handful of studies from South Asian countries have examined the long-term psychological effects of disasters on younger age groups, even though the frequency and extent of natural disasters in this part of the world are considerable. As trauma during childhood and adolescence can etch an indelible signature on the individual's development and may lead to future disorder,16 the need for such studies is underscored.
A snowstorm followed by an avalanche struck a small mountainous village, “Waltengu Nard”, in South Kashmir, India on 19th February 2005, about a month after the devastating Indian Ocean Tsunami. Of the total population, 24.77% (n=164) perished.17 The total population of children and adolescents prior to the disaster was reported as 242, of whom 52 died (21.49%).17 The present study aims to determine long-term psychiatric morbidity among the surviving children and adolescents of this disaster-affected region five years after the snowstorm, based on the notion that psychiatric disturbances can be present in children and adolescents years after a disaster has occurred.18,19,20 The socio-demographic variables of the patients are also studied. The results may support the need to apply wide-area epidemiological approaches to mental health assessment after any large-scale disaster.
Material and Methods
The study was designed as a survey of children attending school. Children aged 6 to 17 years from the high school near Waltengu Nard were recruited for the study. Only those children who were present in the area during the disaster were included. Those with any psychiatric disorder prior to the disaster, mental retardation, organic brain disorder, serious physical disability predating the disaster (e.g. blindness, polio, amputated limbs) or severe medical conditions (e.g. congenital or rheumatic heart disease, tuberculosis, malignancy) were excluded from the study. Within the school, an alphabetically ordered list was prepared including all classes with children aged 6 years to 17 years 11 months. Every 3rd student on this list was chosen and screened against the inclusion and exclusion criteria until a sample size of 100 children was reached (a sketch of this procedure is shown below). Informed consent was obtained from both the child and one of his/her caregivers or parents.
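For readers interested in the sampling procedure, the systematic selection described above (every 3rd name on the alphabetical roll, screened against the inclusion and exclusion criteria until 100 children are enrolled) can be sketched as follows; the `eligible` function is a stand-in for the stated criteria, and all names are assumptions for this illustration:

```python
def select_sample(alphabetical_roll, eligible, target=100, step=3):
    """Systematic sampling: take every `step`-th child who meets the criteria."""
    sample = []
    for index, child in enumerate(alphabetical_roll, start=1):
        if index % step == 0 and eligible(child):
            sample.append(child)
            if len(sample) == target:
                break
    return sample
```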
Selected children were assessed with the Mini International Neuropsychiatric Interview for Children and Adolescents (MINI-KID), a DSM-IV-based diagnostic interview with high reliability and validity, for evaluation of symptoms and diagnosis.21,22 A semi-structured proforma was prepared for the socio-demographic profile. Kuppuswamy's Socioeconomic Status Scale, 2007 was used to determine socio-economic status.23 The Oslo-3 Social Support Scale (OSS-3) was used to assess social support.24
Interviews were conducted, following formal training in administering the MINI-KID, by trained psychiatrists of the Department of Psychiatry, GMC Srinagar. The data were then analysed using appropriate statistical methods. A p-value less than 0.05 was taken as significant.
Results
Of the 100 children and adolescents studied (41.32% of the affected population of children and adolescents), 41 were noted to have at least one psychiatric diagnosis (referred to hereafter as patients). The socio-demographic profile of these patients is presented in Table 1. The age and sex distribution of diagnoses is presented in Table 2 and Table 3 respectively.
A total of 54 diagnoses were observed in these 41 patients (Figure 1), with comorbidities present in 12 patients (29.27%). Eleven of these 12 patients were experiencing two concurrent psychiatric disorders and 1 was enduring three concurrent psychiatric diagnoses. Post-Traumatic Stress Disorder (PTSD) was the commonest comorbidity, seen in 6 patients (42.86% of total PTSD cases). This was followed by Major Depressive Disorder (MDD), Generalized Anxiety Disorder (GAD), Suicidality, Social Phobia, Panic Disorder, Agoraphobia and Separation Anxiety Disorder (SAD) in 2 patients each. Attention Deficit/Hyperactivity Disorder (ADHD), Conduct Disorder (CD), Specific Phobia (dark), Substance Abuse and Dysthymia were comorbid in one patient each. Of the total 54 diagnoses, the commonest were Anxiety disorders (except PTSD), PTSD and affective disorders (MDD, dysthymia and mania), comprising 37.04% (N=20), 25.93% (N=14) and 14.81% (N=8) of total diagnoses respectively.
Figure 1: Diagnostic profile of the patients
Discussion
When children and their families are involved in natural or man-made disasters they may be exposed to diverse stressors which may impact the mental health of the survivors, including the children.25 Studies have suggested that reliance on parental reports of children’s distress may not be valid, as parents typically under-report symptoms compared with child and adolescent self-report in mental health surveys.26 Thus, in our study, the psychiatric interview of each child was conducted individually without taking leads from the parents. In the early "heroic" and "honeymoon" phases of disaster relief there is much energy, optimism and altruism; as fatigue sets in over time and frustrations and disillusionment accumulate, more stress symptoms may appear.27 Accordingly, the study was carried out five years after the disaster to capture this delayed response in the form of psychiatric morbidity.
The extensive stress imposed on our sample population by the snowstorm resulted in a high prevalence of psychiatric disorders, which was apparently not attributable to any other psychological stress during this period. Despite the fact that the study was done five years after the disaster, it still found high psychiatric morbidity. Many young survivors reported restlessness and fear with the return of the season in which the snowstorm occurred. All this kept the memories of the disaster and the losses fresh in their minds, not allowing the wounds to heal. Some said that they could not keep their minds off the snowstorm during the weeks approaching the anniversary, much like the so-called anniversary reactions.28 Even children and adolescents who have rebuilt their homes or found new dwellings to rent frequently feel a sense of loss at the anniversary. Although the area was provided with adequate relief in terms of better infrastructure, education, employment and financial help in the years after the disaster, perhaps four such anniversary reactions, together with the fact that the survivors are still living in the same geographical area and climatic conditions, have not allowed them to settle back into a routine. Of the total sample of 100 children, 41% (N=41) had at least one diagnosis. This is very similar to a study by Kar and Bastia after a natural disaster (cyclone) in Orissa, who found 37.9% of adolescents with any diagnosis.29 Similarly, Margoob et al found that 34.39% had a psychiatric disorder at the end of one year after the disaster.17 Other studies have yielded results in the range of 12% to 70% for total psychiatric morbidity.26, 30-33
PTSD was the commonest individual diagnosis in our study, present in 14% (N=14) of the total population. Studies have shown PTSD prevalence after disaster ranging from as high as 72%34 to as low as 8%.35 However, these were done immediately or within a few months after the disaster and the longitudinal pattern was not studied. A study conducted by Margoob et al reported a prevalence of 18.51% in a sample of survivors one year after the same snowstorm on which the present study is based.17 Similarly, Bockszczanin et al, 2.5 years after a flood in Poland, reported 18% of children to be suffering from PTSD.36 Our finding of 14% of patients suffering from PTSD is therefore consistent with this trend, given that we studied them five years after the disaster. The diagnosis of PTSD in our study was more common among the pre-adolescent age group, 22.58% (N=7), and adolescents, 33.33% (N=2). Similar findings were reported by Hoven et al, who found a prevalence of 20.1% in this age group.30 PTSD was also more frequent in females in our study, observed in 16.98% of females (N=9) compared with 10.64% of males. Hoven et al also found a higher prevalence in girls (13.3% vs. 7.4%).30
Anxiety disorders (excluding PTSD) formed the most common collective diagnostic category in our sample, present in 20% (N=20) of the sample population and accounting for about 37.04% of total diagnoses. These included cases of GAD 5% (N=5), SAD 4% (N=4), Social Phobia 3% (N=3), Agoraphobia 3% (N=3), Panic Disorder 2% (N=2), OCD 2% (N=2) and Specific Phobia 1% (N=1). Similarly, Norris et al found anxiety in various forms in 32% of their sample of disaster victims.25 Similar findings were also reported by Reijneveld et al.37 Hoven et al, in an important study after 9/11, found the prevalence of various anxiety disorders to be of the magnitude of 28.6%,30 and our findings correlate closely with that study. GAD was the commonest anxiety disorder in this group, with a prevalence of 5% (N=5) in the study sample. This prevalence was almost half that of earlier studies in children and adolescents after a disaster by Kar and Bastia,29 where it was 12%, and by Hoven et al,30 where it was 10.3%; however, those studies were conducted within a few months of the disaster and hence found a higher prevalence of GAD than ours. GAD was more common in girls than boys (7.55% vs. 2.12%), similar to the study by Hoven et al.30 SAD was also prominent among the anxiety disorders, with 4% (N=4) of the sample receiving the diagnosis. Some studies, such as that by Hoven et al, found it to be prevalent in 12.3% of their sample 6 months after 9/11,30 whereas other studies have found SAD to be comparatively less frequent post-disaster in children and adolescents.34 Our findings are modest and lie somewhere between these two studies; as ours was a long-term study, the SAD figures are somewhat low. SAD in our study was more prevalent in girls than boys (5.66% vs. 2.12%) and was exclusively seen in children below 10 years of age; these findings are in tune with the study by Hoven et al.30 Panic disorder showed a low prevalence in our study, found in only 2% (N=2) of patients; in both it was comorbid, with MDD in one and with Agoraphobia in the other. Studies immediately post-disaster found the prevalence to be around 10.8% (Math et al)32 and 8.7% (Hoven et al).30 However, an earlier study of survivors from the same area one year after the disaster found a 3.08% prevalence of panic disorder, which is very similar to ours.17 It was more prevalent in females, which correlates well with the study by Hoven et al.30 Agoraphobia was present in 3% (N=3) of patients; it was comorbid in two patients (with panic disorder and with PTSD) and an individual diagnosis in one. Hoven et al found high rates of Agoraphobia post-disaster, about 14.8%,30 but again their study was done only 6 months after 9/11 and hence found more morbidity. A female preponderance of the diagnosis was seen (3.77% vs. 2.12%), as in earlier studies.30 Obsessive traits are known to increase subsequent to disaster in the surviving population,38 and 2% of cases satisfying the criteria for OCD were seen in our study. The commonest obsessions were recurrent intrusive and distressing themes related to the disaster and ruminations about whether it could have been prevented, followed by worries about harm befalling themselves or family members, or fear of harming others due to losing control over aggressive impulses. Other obsessive themes were related to scenes of trauma, commonly blood.
Obsessions regarding extreme fears of contamination were also present.
The affective disorders have been studied less often than PTSD after disaster. Depression is known to occur with increased frequency subsequent to disaster.25 MDD was present in 4% (N=4) of the total sample population. Studies conducted immediately after disasters have found a higher prevalence: Math et al,32 Kar & Bastia29 and Catani et al33 found prevalences of 13.5%, 17.6% and 19.6% respectively. A study at three months and at one year after the disaster in adults from the same population as ours found the prevalence of MDD to be 29.6% and 14.28% respectively.17 This decreasing trend is substantiated by, and in line with, the findings of our study. MDD was more common in females (5.66% vs. 2.12%), which is similar to the study of Hoven et al,30 and our finding of an increased prevalence of MDD in middle adolescents (7.69%) compared with other age groups is also comparable to Hoven et al.30 Dysthymia was observed in 3% (N=3) of our sample. An increased prevalence of dysthymia has not been reported post-disaster in earlier studies. Our findings could be part of a broader affective spectrum, with dysthymia resulting from diminished self-esteem and a sense of helplessness subsequent to the disaster. In addition to fulfilling the duration criterion, these patients were given the diagnosis of dysthymia because their depressed mood was more apparent subjectively than objectively. Finally, these patients could have been on a natural course of dysthymia, which usually begins in childhood. Combined, dysthymia and MDD accounted for 7% (N=7) of patients; if taken as a collective depression category, the results are somewhat more comparable with the above studies. One patient had Mania (past) and a positive family history of Bipolar Affective Disorder. This could be an incidental finding, even though psychological stress is known to precipitate mania.39 Moreover, the prevalence of 1% in our study is even less than that in the general population, so it could be an artifact.40 Studies have consistently found an increased prevalence of adjustment disorder after disaster.41 In our study the prevalence of adjustment disorder was 3% (N=3; anxiety 2, depression 1). In a study by Math et al 3 months after the tsunami it was 13.5%.32 The lower prevalence in our study again reflects its long-term nature. The role of trauma, stress and negative life events as risk factors for suicidal ideation and behaviour has long been recognised.42 A longitudinal investigation looking at trends in suicidality and mental illness in the aftermath of Hurricane Katrina found significant increases in suicidal ideation and plans in the year after the disaster as a result of unresolved hurricane-related stresses.43 Suicidality in our population sample was found in 2% (N=2). These results are in tune with those of Kessler et al, although that study was conducted immediately after Hurricane Katrina and hence found a higher prevalence of 6.4%.43
Many symptoms of PTSD overlap with those of ADHD and CD.44 In our study, each of these disorders was present in 2% of the sample, and in one patient they were comorbid with each other (ADHD with CD). In a study by Hoven et al 6 months after 9/11, the prevalence of CD was found to be as high as 12.8%,30 which could reflect the immediate post-disaster nature of that study. Also, because of the symptom overlap, more weight was given to the PTSD diagnosis in our study.
Three patients had a diagnosis of Substance Abuse, Tic Disorder or PDD, 1% each. Although substance abuse is known to increase subsequent to disaster in adolescents,30 no evidence was found relating tic disorder or PDD to post-disaster psychiatric stress. The low prevalence of substance abuse in our sample reflects the fact that the area is inhabited by a Muslim population, so alcohol is not religiously sanctioned, and harder substances are either not available or cannot be afforded. The only substance available is marijuana (cannabis); however, most used it only recreationally and the criteria for abuse were not met. The sole patient with substance abuse was also using cannabis. It is also a well-known phenomenon that drug-dependent subjects do not reveal true information and deny any history of abuse at first contact with the investigating team.45 Tic disorder and PDD are regarded as biological disorders and their relation to trauma is only incidental.46, 47
Studies have consistently shown the presence of psychiatric comorbidities post-disaster.48, 49 The same was observed in our study, where 29.27% of all patients had a comorbid psychiatric diagnosis. Similar results were found by Kar and Bastia, who found comorbidities in 39% of adolescents.29 PTSD is the most common comorbid disorder observed in the post-disaster period,48, 49 and the same was observed in our study, with PTSD comorbid in 14.63% (N=6) of cases. However, when all the anxiety disorders except PTSD were combined, they exceeded the comorbidity of PTSD, being comorbid in 21.95% (N=9) of patients. There is an expanding literature on the comorbidity of anxiety and depression in children and adolescents.50, 51, 52 Similar comorbidity of an anxiety disorder (including PTSD) and a depressive disorder (including Dysthymia) was seen in 7.32% (N=3) of patients in our study. These results show that psychiatric diagnoses are frequently comorbid after a disaster, and clinicians need to be vigilant about them for holistic psychiatric assessment, treatment and rehabilitation of the survivors.
Sociodemographic Profile: In our sample the prevalence of psychiatric morbidity was highest in pre-adolescents (6-10 years age group), at 61.29% of that subgroup. This is consistent with research suggesting that younger children possess fewer strategies for coping with both the immediate disaster impact and its aftermath, and thus may suffer more severe emotional and psychological problems.53 The second commonest group was 11-13 years, with 40% morbidity, which is consistent with an earlier study that also found significant morbidity in this age group.54 The age characteristics of the total population also closely matched the above findings. More females than males were found to exhibit psychiatric morbidity in our study (47.17% vs. 34.04%). Although these findings were in tune with those of Hoven et al, their figures were a little lower than ours (34.7% vs. 21.8%).30 Some studies have found that girls express more anxiety-depressive disorders30 and PTSD symptoms,55, 56 while boys seem to exhibit more behavioural problems.57 Similarly, in our study the rates of anxiety disorders, depressive disorders and PTSD were higher in girls, and conduct disorder was found exclusively in boys.
Our study suggested that children up to the 5th standard were more susceptible (51.02%) than those in higher classes. This is in accordance with an earlier study by Kar et al54 and with a study by Hoven et al, which found maximum morbidity (34.1%) in preschoolers.30 Thus it could be said that higher educational status was protective, in addition to increasing age. Psychiatric morbidity was highest in children from joint family systems (48.15%), followed by children from extended nuclear (37.5%) and nuclear (27.27%) families. This pattern is consistent with an earlier study by Margoob et al58 and could reflect the fact that joint families lost more family members in the tragedy. There were no cases from the upper and upper-middle socio-economic classes, and the lower-middle class was significantly under-represented in our sample; this reflected the demographics of the area rather than a sampling error. Consequently, higher morbidity was seen in the upper-lower socio-economic class (49.09%) followed by the lower class (31.71%). All the above findings are in accordance with an earlier study by Margoob et al.58
Psychiatric morbidity was not found to be influenced by the source of family income, as also observed by Kar and Bastia in their study.29 The majority of the patients had poor social support (52.17%, p=0.03), a finding substantiated by earlier studies.59 Loss of a parent was strongly associated with lower social support and high psychiatric morbidity, as also reported by earlier studies.31, 60 Our study found higher psychiatric morbidity in first-born children (71.43%). This could be due to the increased burden of family matters on the eldest child after a disaster, especially when the head of the family or the mother has perished in the catastrophe, and is in accordance with earlier studies on birth order and psychiatric morbidity.61 However, in our study only children also showed significant morbidity, which is in contrast to earlier studies.61 This could be because an only child has significantly less social support owing to the absence of siblings, with a death in the family due to the disaster considerably compounding the problem. Also, the youngest born is often more pampered and hence more likely to feel emotionally insecure when attention is shifted away in the aftermath of a disaster.
There was an unavoidable limitation in the study: the disaster-affected population was not compared with a normal or control population. The difficulty we faced was in finding a control population, as the area has a racially, geographically and culturally distinct population of Gujjars, all of whom were affected, so no appropriate control group could be found. However, compared with most studies done in populations from north India, the prevalence in our study is considerably greater than that found in those studies.62
Conclusion
This research portrays and scrutinises the experience of children and adolescents in the aftermath of a snowstorm disaster and supports the idea that children are susceptible to morbid psychological experiences long after the traumatic event has occurred. With that said, we want to stress the decisive role of support agents for children: the adults and peers who help children and youth recuperate in the long term. We also stress the need to provide an outreach psychosocial and clinical service long after the disaster, when no one is around to help after the initial knee-jerk response of relief agencies.
A 61-year-old woman underwent a modified radical mastectomy and axillary clearance in 2008 for a carcinoma of the left breast. Histopathological examination revealed two tumours within the left breast: a 16mm Grade 2 lobular carcinoma with probable vascular invasion and a 9mm Grade 1 infiltrating ductal carcinoma with no vascular invasion. She had clear resection margins. 21 out of 34 removed lymph nodes were positive for metastatic deposits. The tumour was oestrogen receptor positive and HER2 negative. She was staged as T1 N3a Mx and the tumour had a Nottingham Prognostic Index of 5.32. Metastatic workup revealed no distant metastasis.
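The quoted index is consistent with the standard Nottingham Prognostic Index formula (0.2 × tumour size in cm + histological grade + lymph node stage, with stage 3 for more than three positive nodes), applied to the larger (16mm, Grade 2) tumour. The worked check below is ours, not part of the original report:

```python
# Worked check of the quoted Nottingham Prognostic Index, using the standard
# formula NPI = 0.2 * size(cm) + grade + node stage, with node stage 3 assumed
# for the 21 positive nodes described above.
size_cm, grade, node_stage = 1.6, 2, 3
npi = 0.2 * size_cm + grade + node_stage
print(round(npi, 2))  # -> 5.32
```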
Postoperatively, she required aspiration of a seroma but her recovery was otherwise satisfactory. She received adjuvant chemotherapy in the form of three cycles of Fluorouracil, Epirubicin and Cyclophosphamide and three cycles of Docetaxel. In addition, she had postoperative radiotherapy to the chest wall and supraclavicular fossa (40 Gy in 15 fractions over 3 weeks) and hormonal therapy with Letrozole 2.5mg once daily.
The patient opted to undergo a prophylactic right mastectomy in 2010. She attended follow-up regularly and appeared to be free of disease recurrence for 6 years.
Her past surgical history included abdominal hysterectomy and bilateral salpingo-oophorectomy for fibroid disease, as well as varicose vein stripping. She is a non-smoker and does not consume alcohol. She had a family history of colon and cervical cancer in her uncle and sister respectively.
The patient visited the surgical outpatient clinic complaining of abdominal cramps, altered bowel habit and fatigue of a few months’ duration. There was no associated rectal bleeding, haematemesis, melaena, weight loss or urinary symptoms. Physical examination was unremarkable, but she was noted to have gradually worsening renal function. Her symptoms were at first attributed to side effects of intravenous antibiotic treatment, which she had received during an admission for cellulitis. She had already undergone an upper GI endoscopy which showed oesophagitis and ulceration; biopsies were within normal limits. She received treatment with proton pump inhibitors but her symptoms persisted.
A non-contrast abdominal CT scan was performed, on account of her poor renal function, which showed bilateral hydronephrosis and thickening of the postero-superior aspect of the bladder wall. Within the limitations of the non-contrast study, there were no other abnormalities. A colonoscopy was also performed to investigate her altered bowel habit, and it revealed a benign-looking stricture in the sigmoid colon about 25cm from the anal verge which was easily bypassed by the scope.
Figure 1. Benign stricture on flexible sigmoidoscopy
Biopsies of the sigmoid stricture showed an infiltrate of small to medium-sized tumour cells in the submucosa, arranged in an Indian file pattern. They were positive for AE1/AE3 (pancytokeratins) and negative for CD68. They were positive for CK7 and negative for CK20, strongly positive for oestrogen receptors and HER2 negative. Taken in conjunction with the patient’s past history of an invasive lobular carcinoma of the breast, the appearance was consistent with a metastatic lobular carcinoma.
Figure 2. Clusters and cords of cells with positive cytoplasm for the cytokeratin immunostain CK7. Although the classical ‘Indian filing’ of lobular carcinoma is not well seen, the image clearly demonstrates that the large bowel glands are negative (normally CK20+, CK7-) and that the infiltrate is beneath the glandular mucosa (i.e. not originating from dysplastic glands within the mucosa and raising the possibility of infiltration from outside the bowel wall). The magnification is x200. Lobular carcinoma is usually CK7 +, CK20 -, ER +.
Figure 3. The same cells with their nuclei staining positively with an immunostain to oestrogen receptors. There are a few short chains of ‘Indian filing’, with the cells appearing rather rectangular in shape with straight margins. Slight ‘moulding’ of the nuclei can be made out as they press against one another. The magnification is x400.
Figure 4. Haematoxylin and Eosin section at 400 magnification. This shows a diffuse infiltrate of single cells with eccentric nuclei.
The patient required a right nephrostomy and a cystoscopy with left double J ureteric stent insertion to address her hydronephrosis and deteriorating renal function before undergoing restaging of her disease.
DISCUSSION
In patients with a history of breast cancer, isolated GI metastases are less common than benign disease processes or second primaries of the GI tract.1,2 In a retrospective review, 73 out of 12001 cases of breast cancer had gastrointestinal metastases, of which 24 were to the colorectum,3 and invasive lobular carcinoma was the commonest histological subtype.3,4 However, sixteen percent of patients with breast cancer have GI metastases at postmortem examination.1
There may be a long interval between the diagnosis of breast cancer and the development of gastrointestinal metastasis, which, together with their rare occurrence and nonspecific clinical and radiological manifestations, adds to the diagnostic challenge. The median interval between the diagnosis and the development of GI metastasis was reported to be 6 years (range 0.25 to 12.5 years) by Schwarz et al,5 with 25 years being the longest reported in the literature.6 Because of this long interval, the history of a primary breast cancer can be missed. This also highlights the importance of long-term follow-up and maintaining an index of suspicion when these patients develop GI symptoms.
In our case, the interval between the diagnosis of breast cancer and colonic metastasis was 81 months. Her GI symptoms were initially attributed to side effects of antibiotic treatment for cellulitis and to dyspepsia before she was investigated with a colonoscopy. Even at colonoscopy the appearance was that of a smooth, benign-looking stricture which did not seem to harbour any sinister pathology.
Histological examination is probably the most reliable tool for making the diagnosis, and it is prudent in such cases to compare the specimen with the original breast tumour. In this case there were two primary tumours, an invasive ductal carcinoma as well as a lobular carcinoma, but the metastatic disease favoured the lobular component, which is consistent with other published reports in the literature. The reasons why metastases favour lobular carcinoma are poorly understood. One explanation is the loss of expression of E-cadherin, a molecule involved in cellular adhesion, in invasive lobular carcinoma7. A similar case in which the primary was a mixed ductal and lobular type with a lobular-subtype colonic metastasis was reported by Uygun et al.8 Immunohistochemistry can also help in establishing the diagnosis. Metastatic breast cancers tend to be positive for Oestrogen or Progesterone receptors as well as Gross Cystic Disease Fluid Protein-15.9, 10 It is, however, worth noting that primary colonic cancers can be oestrogen receptor positive in 30 to 70% of cases.11
Accurate histopathological diagnosis probably saved our patient an unnecessary surgical treatment for a presumed primary colonic neoplasm, as the main focus of her treatment should be systemic therapy for metastatic breast cancer.
CONCLUSION
GI tract metastases from breast cancer are a rare occurrence. The patients may present after a long interval from the original diagnosis and the clinical and radiological features are nonspecific with the diagnosis often established on histological examination. Moreover, the history of breast cancer may not be elicited in all cases and these patients may present to a gastroenterologist or colorectal surgeon rather than a breast surgeon or oncologist. Therefore, remaining vigilant to this possibility is advised in any patient with a history of breast cancer who presents with unexplained GI symptoms.
The NHS is supplied with approximately 300,000 NG tubes per year.1 Approximately 800-1000 incidents related to the insertion and use of NG tubes are reported to the NPSA every year. Difficulties are often encountered with their insertion, especially in the setting of general anaesthesia. Whilst stridor and injury related to the use of NG tubes have been reported previously,2,3 these have been related to prolonged placement. We describe a case of acute stridor occurring in the recovery room due to direct trauma to the airway during NG tube insertion.
Case report
A 61-year-old male presented with clinical symptoms of bowel obstruction. His medical history included atrial fibrillation (AF) treated with flecainide, and a 30-pack-year smoking history. He was previously independent, with a World Health Organisation performance status of 0. After CT confirmation of bowel obstruction, he was scheduled for an emergency laparotomy, with a predicted P-POSSUM 30-day mortality of over 10%. He required no organ support pre-operatively, although his AF was poorly controlled. He had a low thoracic epidural sited while awake, followed by induction of general anaesthesia with a rapid sequence induction. An arterial line, a central venous line and an NG tube were inserted once he was anaesthetised. The NG tube was documented as difficult to site, and there were several attempts at blind insertion via the oral and nasal routes before it was successfully inserted under direct vision using a laryngoscope.
The operative finding was that of an unresectable caecal tumour, and a defunctioning loop colostomy was formed. The total duration was 150 minutes. Following peripheral nerve stimulation and administration of 2mg/kg Sugammadex, he was woken, extubated and escorted to recovery.
Within minutes of arriving in the recovery area, he became acutely stridulous. Emergency assistance was called and, after assessment, he was given nebulised adrenaline (1 mg diluted in 4 ml of 0.9% saline) and 200 mg of intravenous hydrocortisone. The nasogastric tube was removed, and his breathing was then supported with CPAP via a Mapleson C circuit and 100% oxygen. Direct examination of his airway and indirect nasendoscopy with a Storz fibre-optic scope were performed. Significant bruising of his soft palate was seen, in addition to bruising and oedema of the soft tissues around the arytenoid cartilages, with a small haematoma within the valleculae [Figure 1]. After approximately twenty minutes his stridor settled.
He was transferred to a level 2 high dependency unit later that day. He did not suffer from any further airway compromise, and his symptoms completely resolved.
Discussion
The insertion of an NG tube, whilst often deemed low risk, may result in life-threatening consequences.4 There is even an 'NG tube syndrome',5 the pathophysiological mechanism of which is thought to be paresis of the posterior cricoarytenoid muscles secondary to ulceration and infection over the posterior lamina of the cricoid. There is no doubt that insertion of an NG tube in the anaesthetised patient can prove difficult, with a failure rate of up to 50% on first pass6 and repeated attempts at insertion often being required. However, this case highlights the need for a controlled and ordered approach to managing the difficulties that can be encountered. Medical training incorporates NG insertion as a basic skill within the curriculum, but this affords new anaesthetic trainees little help with the anaesthetised and intubated patient.
There are several techniques described to insert NG tubes in anaesthetised, intubated patients7. There is evidence to suggest that modified techniques such as a reverse Sellick’s manoeuvre or neck flexion with lateral pressure are better than blind insertion in the neutral position. In the right hands, insertion under direct vision can overcome most difficulties, but is again not without risk.
We feel it is important to remember that NG insertion can cause patient harm and potentially lead to life-threatening consequences. Whatever approach or technique is chosen, having a logical and ordered plan is paramount. Using suitable alternative methods of insertion, or abandoning the procedure, rather than blindly continuing to repeat the same unsuccessful method, is key to success, as it would be in any clinical encounter.
Published with the written consent of the patient.
The closure of asylums in the last century has resulted in an increased number of compulsory hospital admissions for psychiatric patients. Psycho-geriatric patients are highly vulnerable in this respect. Although the traditional buildings instituted for the care of the mentally afflicted have gone, misconceptions about provision and anecdotes about incarceration continue to haunt the community. Recent legislative changes have further extended the occurrence of involuntary hospital admission.1 Compulsory community care is under constant review. Concurrently, the validity of the concept of mental illness, psychiatric classification and diagnostic dilemmas all continue to be debated. Confinement has regained respectability in the discourses of the present-day British mental health system because of violent offences committed by psychiatric patients and their portrayal by the public media as a failure of community care.
Table 1, Advantages of Mental Health Acts
1. Mental Health Acts secure the safety of vulnerable people
2. They help patients to regain control of their lives
3. Compulsory treatment helps to prevent further deterioration of mental health
4. They aim to provide effective care and treatment
5. They ensure better aftercare
6. They protect the safety of other people
7. They prevent suicides
8. They provide opportunities for assessment and diagnosis
9. They can be therapeutic, by unburdening personal responsibilities to an institution
Quantitative studies suggest that involuntarily admitted patients generally show clinical improvement and, in retrospect, view their compulsory admission rather positively.2 It is an unquestionable fact that Mental Health Acts prevent suicides and homicides (Table 1). Mental Health Acts nevertheless have some unsatisfactory outcomes, particularly for a subset of patients, including senior citizens admitted formally. It is important to identify such patients and take additional precautions in their management, as they run the risk of leaving hospital feeling inferior and inadequate. Patients’ specific characteristics, thought processes and past treatment experiences colour their attitude towards coerced treatment and determine the gains and shortcomings of compulsory care.
Disadvantages
The Mental Health Acts are open to social abuse, and elderly patients can be more defenceless in this respect. Specifically, they may be invoked to control behaviour, misused for material gain, or implicated in subtle expressions of revenge. They are sometimes invoked to hasten divorce proceedings or to secure the custody of children by a specific parent. They are also used by parents to control the behaviour of their children. Mental Health Acts designed to control psychiatric patients are being enacted and enforced in some underdeveloped countries that lack an efficient tribunal system to monitor their effects.
A patient who has been detained is at risk of repeat detention, and someone who has been inappropriately assessed becomes increasingly vulnerable to control on psychiatric grounds. The experience of being detained involuntarily has a reductive effect on behaviour after discharge; it may induce anxiety or post-psychiatric depression. The awareness of being deemed to require compulsory detention generates negative attitudes such as self-denigration, fear and unhealthy repression of anger. It may also impede self-direction and the normal sense of internal control, and may encourage the view that, in a world perceived as divided into camps of mutually exclusive ‘normal’ and ‘abnormal’ people, the patient is in the latter category. Compulsory detention may lead to suicide because the patient loses their sense of integration within their own society. Furthermore, the fear and anxiety associated with involuntary admission delay the recovery process. There are other frequently occurring barriers to recovery for those affected, such as loss of capabilities (whether real or imagined), ineffectual medication due to poor elicitation of symptoms because of the patient’s lack of cooperation, and negative drug side effects.
Depressed patients have a higher suicide risk than the population at large and one of the reasons for detention is suicidality. Some of the subjective symptoms of depression can be ameliorated by denying them, while compulsory detention may reinforce depressive symptoms. Detention gives carers a false sense of security and this may lead them to relax their vigilance towards the patient. The Mental Health Acts increase the stigma associated with psychiatric illness and with the exuberant expression of emotions. Patients who are under section or are frightened of being placed under section may deliberately mask their symptoms in an attempt to have the section lifted or to avoid sectioning.
Trans-cultural studies
Trans-cultural studies show that members of migrant cultures, particularly the elderly, are more at risk of inappropriate sectioning than the rest of the population because of professionals’ lack of knowledge about the patient’s culture. For instance, the debate over the over-diagnosis of schizophrenia among Afro-Caribbean patients is still unsettled. A study conducted in South London concluded that Black African and Black Afro-Caribbean patients with psychosis in that area are more likely than White patients to be detained under the Mental Health Act 1983.3 Great Britain has become a multicultural society, and a significant percentage of professionals working in psychiatric units have been trained overseas, in a wide range of countries. This creates a further risk of inappropriate diagnosis. There needs to be more emphasis on the significance of trans-cultural psychiatry in the United Kingdom. In particular, psychiatrists should be aware that psychiatry is a medicine of language and culture as well as of the mind.
Medical Dilemmas
Countries in which Mental Health Acts are widely enforced have not achieved any reduction in suicide rates through their implementation. Sectioning is perceived by many of the patients affected as a psychological guillotine, a form of psychiatric terrorism. The medical profession is invested by the Acts with undue power over society. This is of particular concern because training in psychiatry does not include the study of free will and allied philosophical issues, and also because there is no clear definition or description of mind and consciousness. In psychiatry there is a lack of clinical indicators and psycho-physiological parameters, so the criteria for diagnosis are imprecise, with a concomitant risk of the Acts being erroneously implemented. It has been postulated that once a person has been classified as having deviant behaviour, that categorisation has a potent effect on the subsequent actions of the person concerned and of those interacting with them.
Is it not justifiable to argue that, even if a few mentally ill patients are under-diagnosed and escape psychiatric admission, no one whom we would regard as normal should be detained in a psychiatric hospital against their will? Such a view is analogous to the judicial view of capital punishment: even if ninety-nine murderers escape execution because there is no death penalty, one innocent person should not be sentenced to death. Mental Health Acts may be a necessary evil, but they present a dilemma for mental health professionals: the morality of helping patients and protecting society from the consequences of their illness, set against the immorality of restricting their freedom. Clinicians become torn between the ideals of curing mental illness and defending the sanity of patients.
Patients’ Perception
A small survey conducted by the author revealed that no sectioned patient in the group studied sent a thank-you card after discharge to the ward in which they had been confined, whereas many voluntary patients expressed appreciation in that way. This is an indicator of the attitude of sectioned patients towards the Mental Health Acts. One reason must be that a record of being sectioned limits their freedom to travel and adversely affects their employment opportunities. One patient commented that it is easier for an ex-convict to gain employment than it is for a once-sectioned psychiatric patient.
There is anecdotal evidence illustrating the panic that the word ‘section’ may generate in psychiatric patients. A recovering elderly hypomanic patient explained that, on hearing the word when he was ill, he misconstrued it in relation to sectioning in obstetrics and general surgery. He remembered that, as he resisted entering a taxi while being persuaded to agree to admission, the driver said that he was going to be sectioned if he refused hospital admission. The patient interpreted this as meaning that he was going to be cut into pieces and tried to jump out of the vehicle.
Post-traumatic stress disorder
Any loss of intrinsic importance to an individual constitutes bereavement. Denial, anger and depression experienced in compulsory detention are comparable to bereavement.4 In the case of a detained patient, the loss of self-identity and of social functioning causes a grief reaction. It has been hypothesised that there are high levels of Post-traumatic Stress Disorder (PTSD) symptoms in detained patients.5 Very few repeat detainees become habituated to the implementation of the Mental Health Acts. The vast majority become increasingly frustrated and develop a pessimistic outlook towards their mental health. There is a high incidence of suicide among patients who have multiple detentions.
Post-hospitalisation Stress Disorder is much more common than generally recognised. Formal admission may lead to fear, anger, frustration, depression or loss of self-esteem, depending upon the individual’s psychological response.6 Involuntary admission may result in pervasive distress in any patient – this kind of hospital admission may be perceived as threatening and even as a catastrophe. Detained psychotic patients are less aware of their environment because of the preoccupation with their symptoms. Non-psychotic patients, when detained for instance because of a risk of suicide, are fully aware of their immediate environment and the chaos they have caused to themselves. They have a high risk of PTSD.
Preventive detention
Fear of liability may lead to compulsory hospitalisation solely to prevent violence on the part of patients who otherwise do not require in-patient care.6 Psychiatrists are not trained to police society and may lack sufficient knowledge and experience to participate in the social control responsibilities that are part of the remit of the criminal justice system, yet they are sometimes drawn into that function. Psychiatry has to be safe and secure in the hands of individual psychiatrists, and psychiatrists have to be protected when practising psychiatry. Mentally ill patients are sometimes mistakenly processed through the criminal justice system rather than the mental health system. When that happens, compulsory detention may be perceived as a form of criminalisation of mental illness. Unless there is scrupulous monitoring, mandatory treatment impinges on civil liberties. Preventive detention is legally ambiguous and clinically impractical.
Assessment
Amongst the government’s fundamental powers and responsibilities are protecting people from injury by others and caring for less able people, whether physically or mentally incapacitated. These functions encompass the welfare and safety of both the individual concerned and the public.
A decision about compulsory detention is made on the basis of three considerations: loss of emotional control; psychotic disorder; and impulsivity with serious thoughts, threats or plans to kill oneself or others. Any perceived risk must be imminent and provocative. The clinician is legally required to determine the least restrictive environment to which a patient may be safely assigned for continued care. To fulfil these requirements while implementing the Mental Health Acts, a psychiatrist needs the skills of a physician, lawyer, judge, detective, social worker and philosopher. The decision-making process is influenced by multiple factors, such as the clinician’s propensity to detain patients, the record of past untoward incidents involving the patient, attitudes towards risk-taking, and the availability of hospital beds and alternative safe treatment facilities. It is regrettable that in Section 5(2) assessments it is often a junior doctor, the least experienced person in the team, who is called upon to conduct the evaluation.4 The patient then has to endure a multitude of interviews with mental health staff, a social worker and a solicitor, which most regard as ordeals.
Non-detainable patients
Since the introduction of the Mental Capacity Act 2005, the number of assessments that are followed by a decision against compulsory detention has been increasing. Patients who are assessed for formal admission but not found to be detainable may develop new risks as a result of the assessment procedure itself. Before assessment, mental health professionals may place themselves in covert locations around the patient’s house while neighbours watch eagerly from behind their curtains, damaging the patient’s social image. After a meticulous assessment it may be a relief for the patient that they are not detained, and that euphoria may continue for a short while, but all too often damage has been done. The patient who is tormented by psycho-social stressors may find that the assessment experience intensifies the injury. The decision about whether it is appropriate to assess someone is therefore an area in which more clarification and some management guidelines are much needed. In situations such as these, untoward incidents have been reported periodically. That may mean that the professionals involved, and perhaps also the family members who initiated the assessment, blame themselves and endure severe guilt feelings, or blame each other. Furthermore, psychiatrists are not mind readers. It is possible that a patient will cleverly deny any suicidal intent during assessment, intending to fulfil a suicidal urge afterwards, and that may falsely appear to be a reaction to the assessment. An interview for assessment may be the factor that takes them beyond their limit. Because of all these circumstances, the patient may need intensive home care and counselling after an assessment that does not lead to hospital care. In addition to treating mental illness, it is the duty of the psychiatrist to defend the sanity of patients. The difficulty of defining normalcy is notorious: it is easier to detect psychiatric symptoms than to describe normal behaviour.
Tribunals
Mental health tribunals are demanding and may be humiliating and intimidating. They are highly stressful for the patient and the clinician, and they involve a breach of patient confidentiality. Tribunals are often emotionally charged scenes for the patient and the psychiatrist, and they may result in traumatisation. The largely professional make-up of a tribunal is often perceived as intimidating by patients, who tend to be suspicious of collusion between professionals and, above all, of their reluctance to challenge the decision of a psychiatrist.7 Psychiatrists who are aware of the legal profession’s ignorance of psychiatric issues may dominate the tribunal scene with flamboyant linguistic expressions, while lawyers question the objectivity of psychiatry and the expertise of psychiatrists in legal matters. Tribunals are concerned with the legality of detention and not with the appropriateness of treatment. However, one study has shown that patients who appear before tribunals find it easy to accept that they require compulsory admission.8 Psycho-geriatric patients find it extremely distressing to attend tribunals. Hospital managers’ review hearings are often arranged and carried out promptly, and they involve local people, which may make them less intimidating for detained patients.
Involuntary treatment
Although mental health staff usually have the best of intentions, mandatory treatment may prove traumatic and counter-therapeutic for patients. The experience of undergoing forced treatment adds to the patient’s perception of stigma and discrimination. Involuntary psychiatric drug treatment is bound to be less effective than voluntary treatment, and outcomes may include misdiagnosis and long-lasting, disabling side effects. Forced treatment potentially violates a person’s right to respect for private life under Article 8 of the European Convention on Human Rights. Article 8 is violated only if patients can prove that the treatment given is more harmful than its claimed therapeutic benefits, yet the clinician can administer the treatment if he or she thinks it is therapeutically necessary. Compulsory treatment makes patients feel infantilised, especially because forced psychiatric treatment often involves coercion, emotional intimidation, bullying and threats.
The Community Treatment Order (CTO) is constantly being evaluated in terms of its merits and demerits; the results have been inconclusive and warrant more systematic study. Section 41 of the Mental Health Act inspired the introduction of the CTO, the main purpose being to protect the community from the aggressive behaviour of some psychiatric patients, as Section 41 itself has successfully done, and there are indications that the CTO has fulfilled that goal. It was also intended to enhance compliance and concordance with mental health services and to prevent suicides, but studies indicate that those goals were not achieved.9,10 The Oxford Community Treatment Order Evaluation Trial (OCTET) found no evident advantage in reducing relapse.
The “knee-jerk” reaction from parts of the community service has apparently resulted in spontaneous readmissions of patients under CTO. It has also contributed to prolonged detention of patients awaiting community placement under CTO, because detained patients must remain on Section 3 or 37 to allow the Mental Health Act order to be converted to a CTO upon discharge. Obviously, such a scenario curtails liberty. Patients always feel bitter about the “hanging feeling” of continued detention. Coercion runs the risk of weakening the therapeutic alliance. It may be true that, if fewer conditions are imposed, the CTO could serve as a “memory knot” for patients with limited insight. Despite all the controversies surrounding the benefits of the CTO, its use is increasing worldwide.11
Assertive Human Rights
All human beings have individual rights, and mental health professionals in particular must be mindful of those rights. Table 2 presents a list of assertive human rights, modified from Gael Lindenfield (2001).12
Table 2, Assertive Human Rights
1. The right to ask for what we want (realising that the other person has the right to say “No”).
2. The right to have an opinion, feelings and emotions and to express them appropriately.
3. The right to make statements which have no logical basis and which we do not have to justify (e.g. intuitive ideas and comments).
4. The right to make our own decisions and to cope with the consequences.
5. The right to choose whether or not to get involved in the problems of someone else.
6. The right to know about something and not to understand.
7. The right to be successful.
8. The right to make mistakes.
9. The right to change your mind.
10. The right to privacy.
11. The right to be alone and independent.
12. The right to change ourselves and be assertive people.
13. The right to be neutral.
14. The right to be empathetic and apathetic.
Discussion
Community care is more innovative than compulsory detention in hospital. For the majority of patients, the best way forward is high-quality home treatment, as it is the least restrictive option, and compulsory detention should be the last resort. In some cases, forced psychiatric admission is indicative of a failure in the supply of quality home treatment. One thing that sometimes leads to in-patient admission is lack of confidence in the service available. The perception of home treatment may be at fault here; it needs to be understood as more than merely staying outside hospital. Forensic patients and those with treatment-resistant psychotic disorders who lack insight may be a different matter. CTOs have a serious impact on the autonomy and privacy interests of individuals and should not be applied to compensate for under-resourced community services.
Caring and supportive relationships between mental health staff and patients during involuntary in-patient care have a considerable bearing on the outcome of compulsory detention. A recent study has revealed that, among patients who have been detained involuntarily, perceptions of self are related to the relationships with mental health professionals during the inpatient stay.13 Perceived coercion at admission predicts poor engagement with mental health staff at community follow-up. When professionals demonstrate genuineness and encourage patient participation in treatment decisions, coercive treatment is perceived as less of an infringement of patients’ autonomy and sense of self-value.14 If patients hold both positive and negative views about detention, interventions should be designed to enhance positive experiences by focusing on respect and autonomy. Patients admit that compulsory detention gave them an opportunity to receive medication in a time of crisis, but report that it did not necessarily prevent thoughts relating to self-harm; it simply reduced the opportunities for impulsive acts.
‘Rooming-in’ is worth debating as an alternative to compulsory detention. This is the voluntary participation of so-called confidants, who may be chosen family members or trusted friends, and who provide a 24-hour vigil for the patient in a safe hospital environment. An Australian study has suggested this system is highly valued by nursing staff, patients and their families.15 It is an initiative that needs further testing and evaluation. The resolution of angry feelings towards mental health professionals has a significant bearing on patients’ future compliance. The post-detention period tests the attention given to patients by mental health professionals. Here, staff members have to take the initial steps required to repair damaged relationships, which may have developed in particular with angry patients. Detained patients should be offered counselling at post-discharge follow-up and should be given a satisfactory explanation of the circumstances of their formal admission. Detained patients should be given the support to enable them to rewrite their life story, reconstruct a sense of self, and achieve healing from the assault on their personality by both the illness and the treatment procedures. Specific interventions should be designed and evaluated in order to deal with any unresolved PTSD symptoms relating to formal psychiatric admission.
Bednar tumour, first described by Bednar in 1957, is a pigmented variant of dermatofibrosarcoma protuberans (DFSP). It is a rare entity, constituting 1 to 5% of all DFSPs, which in turn represent 0.1% of skin malignancies. It differs from DFSP by the presence of dendritic cells containing melanin, interspersed between the fusiform cells characteristic of DFSP. The most frequent location is the trunk, followed by the upper and lower extremities and the head and neck region. We report a case of Bednar tumour occurring on the shoulder of a 29-year-old male.
Case report
A 29-year-old male patient presented with a slow-growing swelling on his left shoulder of two years' duration. Physical examination revealed a large, nodular, subcutaneous mass measuring 8 x 7 cm in the left suprascapular region. The clinical impression was of a soft tissue tumour, and total resection with 3-cm margins was performed. Grossly, the tumour measured 9 x 4.5 cm, with a grey-white to grey-black cut surface (Figure 1a, 1b). Microscopy showed spindled cells arranged in a tight storiform pattern admixed with scattered, heavily pigmented cells (Figure 2). On immunohistochemistry, tumour cells were positive for vimentin and CD34 (Figure 3a, 3b) and negative for S100, SMA and desmin. Pigmented cells were positive for S100 and HMB-45 (Figure 4a, 4b) and negative for the other markers. Thus, a final diagnosis of Bednar tumour was rendered.
Figure 1 - Gross appearance of the tumour with cut surface grey white to grey black
Figure 2 - Spindle cells in storiform pattern admixed with scattered heavily pigmented cells
Figure 3 - Tumour cells were positive for vimentin (3a) and CD34 (3b)
Figure 4 - Pigmented cells showed positivity for S100 (4a) and HMB 45 (4b)
Discussion
Dermatofibrosarcoma protuberans (DFSP) is a locally aggressive soft tissue neoplasm with intermediate malignant potential, regarded as a low grade sarcoma by the WHO classification of tumours of the skin.1,2,3
It was first described by Darier and Ferrand as a distinct cutaneous disease entity called progressive and recurring dermatofibroma in 1924. The term dermatofibrosarcoma protuberans was officially coined in 1925 by Hoffman.4
Histopathologically, DFSP is characterised by irregular, short, intersecting fascicles of tumour cells arranged in a characteristic storiform pattern. Cells have spindle-shaped nuclei which are embedded fairly uniformly in a collagenous stroma. There are several histological variants of DFSP. These include Bednar tumour, fibrosarcomatous, fibrosarcomatous with myoid/myofibroblastic change, myxoid, granular cell, palisaded, giant cell fibroblastoma, combined and indeterminate.5
Pigmented DFSP (Bednar tumour), first designated as storiform neurofibroma by Bednar, is a clinically and morphologically distinct variant of DFSP, constituting 5-10% of all cases of DFSP.1,5 The clinical presentation is in the form of erythematous, blue or brown plaque lesions, with a smooth or irregular surface, often adhering to the deep tissue. The tumour may be exophytic, nodular or multilobulated and is generally firm in consistency.2 The lesions grow slowly over a period of months or years. They have been described in all ethnic groups, with a preponderance in blacks. They occur in the third and fourth decades of life, although they may also occur in infancy, and show a slight male predominance.1,5
The histogenesis of Bednar tumour is controversial. Some authors regard these tumours as being of neuroectodermal origin because of the presence of dendritic melanocytes and cells suggestive of Schwannian differentiation, while others attribute the origin to various kinds of local trauma, such as previous burns, vaccination scars, insect bites or vaccinations such as BCG.3,6
Histopathologically, Bednar tumour is characterised by scattered melanosome-containing dendritic cells within an otherwise typical DFSP. The number of melanin-containing cells varies from case to case. Abundant pigmented dendritic cells can cause black discolouration of the tumour, whereas scant pigmented cells can only be identified microscopically.3,5 These tumours grow invasively into the dermis and may reach the subcutaneous strata, fascia and musculature, in a manner similar to that of dermatofibrosarcoma protuberans. Occasionally, Bednar tumour may undergo fibrosarcomatous transformation, with rare examples of pulmonary metastasis.5 Wang et al have reported a case of Bednar tumour with prominent meningothelial-like whorls.7
Immunohistochemically, most of the tumour cells stain positively with CD34 and vimentin, and are negative for neuron-specific enolase, HMB-45 and S-100 protein. However, the cells containing melanin are positive for the usual melanocytic markers, such as S-100 protein.5 On electron microscopy, three populations of cells have been identified, most of the cells being fibroblasts. The second population exhibits fine elongations enclosed in a basal membrane, while the third population consists of dendritic cells containing melanosomes and premelanosomes.3,5
The differential diagnoses include pigmented (melanotic) neurofibroma, psammomatous melanotic schwannoma and desmoplastic (neurotrophic) melanoma. Pigmented neurofibroma can be differentiated from Bednar tumour by the more extensive storiform growth and strong CD34 positivity of the latter. Psammomatous melanotic schwannoma is circumscribed and heavily pigmented, with psammoma bodies, the tumour cells being S-100 positive and CD34 negative. Desmoplastic melanoma shows junctional activity and neurotropism.
Treatment consists of complete excision of the tumour with maximum preservation of normal tissue to maintain function and for optimal cosmesis. Mohs micrographic surgery (MMS), or staged wide excision ('slow Mohs', with formal histopathological sectioning and delayed reconstruction for complete circumferential peripheral and deep margin assessment), has become the standard surgical treatment for DFSP.6,8
Bednar tumour presents a diagnostic challenge to the clinician because of resemblance to other commonly occurring pigmented lesions. Histopathological and immunohistochemical examination are necessary to arrive at the correct diagnosis.
The increasing collaboration between diabetologists and general practitioners (GPs) (e.g. the IGEA project) has given the GP a more prominent role in the management of patients with diabetes. Just as measurement of arterial blood pressure has become an important tool in the GP's follow-up of patients with hypertension, SMBG has become a valuable tool for evaluating glycaemic control. In particular, self-monitoring of both blood pressure and glycaemia is important to assess the efficacy of prescribed therapies, and can help the patient to better understand the importance of control of blood pressure and blood glucose.
Several instruments for measurement of blood pressure have been validated by important medical societies involved in hypertension, and much effort has been devoted to compliance and patient comfort. Less attention, however, has been dedicated to glucometers. In particular, little consideration has been given to patient compliance, and SMBG is often perceived as an agonising experience. Moreover, hourly pre-visit glucose curves for glycaemic control, even if important, do not have the same value as standard monitoring over the 2 to 3 months between visits. In addition, after an initial period of "enthusiasm", the fear and hassle of pricking oneself and the unpleasant feeling of pain often cause the patient to abandon SMBG.
A literature search on PubMed using the terms "self-measurement of blood glucose (SMBG)" and "pain" retrieved only two publications, demonstrating a general lack of interest from the medical community. However, SMBG can be of important diagnostic and therapeutic value. Pain related to skin pricks on the fingertip, needed for glucometric determination of blood glucose, can significantly reduce compliance with SMBG, thus depriving the physician of a useful tool for monitoring the efficacy of anti-hyperglycaemic therapy and glycaemic control. Moreover, HbA1c has clear limitations: even if it provides a good idea of glycaemic control over the past 2-3 months, it is a mean value of pre- and post-prandial blood glucose and does not, therefore, measure glycaemic variability, which is an important cardiovascular risk factor. Thus, more research is needed into puncture sites, as alternatives to the fingertip, that are associated with less pain, which could favour greater use of SMBG.
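As an aside, glycaemic variability is straightforward to summarise from the SMBG record itself. The short sketch below is illustrative only; the readings and the helper function are hypothetical and are not taken from this study. It computes the mean, standard deviation and coefficient of variation of a day's self-monitored values, indices of variability that HbA1c alone cannot capture.

```python
# Illustrative sketch only: summarising glycaemic variability from SMBG readings.
# The readings below are hypothetical and are not taken from this study.
from statistics import mean, stdev

def glycaemic_variability(readings_mg_dl):
    """Return the mean, standard deviation and coefficient of variation (%)."""
    m = mean(readings_mg_dl)
    sd = stdev(readings_mg_dl)
    cv = 100 * sd / m
    return m, sd, cv

# One hypothetical day of pre- and post-prandial self-monitored values (mg/dl)
day_profile = [102, 148, 95, 163, 110, 171, 99]
m, sd, cv = glycaemic_variability(day_profile)
print(f"mean {m:.0f} mg/dl, SD {sd:.0f} mg/dl, CV {cv:.0f}%")
```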
Another problem of significant importance concerns the reproducibility and accuracy of blood glucose measurements. In the traditional method, blood samples for self-monitoring are taken from the fingertip of any finger using a lancing device with a semi-rigid lancet (Figs. 1 and 2). The large blood vessels in the dermis of the fingertip (Fig. 3) are lanced, and a drop of blood is obtained for the glucometer. All lancets are optimised to prick the skin at a depth greater than 0.5 mm, with a variability of ± 0.2 mm (Fig. 4).
Figure 1. The fingertip as a traditional site of puncture using a lancet.
Figure 2. Traditional method for self-monitoring of blood glucose.
Figure 3. Vascularisation of derma.
Figure 4. Traditional lancet.
Unfortunately, by pricking the fingertip at this depth, numerous tactile corpuscles in the dermis are also touched, causing the unpleasant sensation of pain. In a recent study by Koschinsky1 of around 1000 patients with type 1 (T1D) and type 2 diabetes (T2D), about one-half (51%) reported that they normally pricked themselves on the side of the fingertip because it is less painful, whereas almost one-third (31%) used the centre of the fingertip, which is the site associated with the most pain. Other puncture sites on the fingers are used much less frequently (5%), while 12% used other places on the body. It is also interesting to see how many times patients reused the lancet: 10% once, 19% 2-4 times, 22% 5-7 times, 25% 8-10 times and 21% more than 11 times. Pricking oneself several times daily for years is not only troublesome for patients, but also leads to the formation of scars and calluses, and reduces fingertip perception and tactile sensitivity. Notwithstanding this, alternative puncture sites such as the arm, forearm and abdomen have not been evaluated in a systematic manner.
The objective of the present study was to compare alternative puncture sites using a new semi-rigid lancet and to determine whether blood glucose values are similar to those obtained using traditional methods. A new puncture site was chosen, namely the area proximal to the nail bed of each finger. The sensation associated with puncture (with or without pain) was used to compare the two groups, with pain assessed on a visual analogue scale (VAS). Blood glucose was measured in the morning after 12 hours of fasting.
Materials and methods
The present study enrolled 5 general practitioners and 70 patients with diabetes who were free of diabetes-related complications (microalbuminuria, retinopathy, arterial disease of the lower limbs). In addition, patients with diabetic neuropathy or neurological/vascular complications that could alter pain perception were excluded. The study population comprised 20 women and 50 men with a mean age of 47.8 ± 15.3 years and a mean duration of diabetes of 11.4 ± 10.3 years; 34.3% had T1D and 65.7% had T2D. The study was carried out according to the standards of Good Clinical Practice and the Declaration of Helsinki. All patients provided signed informed consent for participation.
Semi-rigid lancets were provided by Terumo Corporation (Tokyo, Japan) and consisted of a 23-gauge needle remodelled to permit less painful puncture than a traditional lancet (Fig. 5). Punctures (nominal penetration from 0.2 to 0.6 mm) were made with a depth variation of ± 0.13 mm. In addition, a novel puncture site was used, namely the area proximal to the nail bed of each finger (Figs. 6-8). In this area of the finger, blood flow is abundant and it is easy to obtain a blood sample. Moreover, the area has fewer tactile and pain receptors than the fingertip, so less pain is produced when it is lanced.
Six fingers were used in a random fashion to evaluate puncture of the anterior part of the finger, the periungual zone and the lateral area of the fingertip (depth 0.2-0.6 mm), compared with fingertip puncture at a depth of 0.6 mm. The sensation provoked by puncture (with or without pain) was used to compare groups. Pain was evaluated using a VAS ranging from 'no pain' to 'worst pain imaginable'. The VAS is a unidimensional tool that quantifies a subjective sensation, such as pain, felt by the patient and encompasses physical, psychological and spiritual variables without distinguishing the impact of the different components.
Figure 5. New lancet
Figure 6. Proximal lateral area of the nail bed as a new site of puncture.
Figure 7. Method of lancing.
Figure 8. Site of lancing
Blood glucose was measured in the morning after 12 hours of fasting. The Fine Touch glucometer used was provided by Terumo Corporation (Tokyo, Japan). Statistical analysis was carried out using Fisher's two-sided test. Differences in blood glucose between the two methods were analysed using the Wilcoxon matched-pairs signed-ranks test. A P value <0.05 was considered statistically significant.
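As a rough illustration of how these two tests are applied, the sketch below (in Python, using SciPy) runs a Wilcoxon matched-pairs signed-ranks test on paired glucose readings and a two-sided Fisher's exact test on pain counts. All numbers and variable names are hypothetical placeholders, not data from this study.

```python
# Illustrative sketch of the two analyses named above, using hypothetical data only.
from scipy import stats

# Hypothetical paired blood glucose readings (mg/dl): fingertip vs. periungual site
fingertip  = [118, 140, 97, 155, 132, 126, 149, 101]
periungual = [121, 138, 99, 151, 135, 124, 150, 104]

# Wilcoxon matched-pairs signed-ranks test for the difference between the two sites
w_stat, w_p = stats.wilcoxon(fingertip, periungual)

# Two-sided Fisher's exact test comparing "no pain" vs. "pain" counts at two depths
# (the counts in this 2x2 table are hypothetical placeholders)
table = [[63, 7],    # 0.2 mm depth: no pain, pain
         [47, 23]]   # 0.4 mm depth: no pain, pain
odds_ratio, f_p = stats.fisher_exact(table, alternative="two-sided")

print(f"Wilcoxon p = {w_p:.3f}; Fisher exact p = {f_p:.3f}")
```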
Results
Pain was not perceived by 90% and 94.28% of subjects punctured in the lateral area of the fingertip at a depth of 0.2 and 0.3 mm, respectively. At a depth of 0.4 mm, 67.14% of subjects did not perceive pain, while at 0.5 mm and 0.6 mm, 47.14% and 17.14% of subjects did not feel pain, respectively. There was no significant difference in pain between punctures at 0.2 and 0.3 mm, while significant differences were seen between 0.2 and 0.4 mm (p <0.05), 0.5 mm (p <0.001) and 0.6 mm (p <0.001). All subjects who underwent puncture in the central zone of the fingertip reported a painful sensation.
Using the periungual puncture site, pain was not reported by any subject, although a bothersome sensation was noted by some. The same results were obtained for all fingers used. Blood glucose levels obtained at the traditional and alternative puncture sites were highly similar, with no significant difference between groups (134.18 ± 5.15 mg/dl vs. 135.18 ± 5.71 mg/dl; p = 0.5957).
Discussion
The present study evaluated the use of alternative puncture sites that are associated with less pain. These encouraging results undoubtedly warrant further investigation in a larger cohort, but nonetheless suggest that compliance with SMBG can be optimised. The use of the area close to the nail bed allowed high-quality blood samples to be obtained for measurement of blood glucose, with an accuracy that was the same as that seen using the fingertip. The lancet used herein, which is composed of a hypodermic needle in a rigid casing that prevents accidental needle-stick injuries both before and after use, was also associated with a lower perception of pain. Thanks to the needle point, made with a triple-bevel cut, epidermal penetration is less traumatic and as a consequence less painful; this also favours rapid recovery of tactile function in patients with T2D. The design allows the use of a larger transverse section with a shallower puncture and less involvement of the nerves present in the skin. In addition, the characteristics of the novel lancing device (Fine Touch, Terumo Corporation, Tokyo, Japan) allow the depth of puncture to be adjusted to the characteristics of each patient (e.g. in children, adolescents and adults).
The depth of penetration of the lancet can be varied from 0.3 to 1.8 mm with an incorporated selector; the maximum deviation of the lancing device in terms of depth is approximately 0.1 mm. Because a minimal depth of only 0.3 mm can be selected, the device can be used at alternative sites, allowing a reduction in the frequency of samples taken from the fingertip. In theory, compared with traditional lancets, this would allow less perception of pain at traditional sites as well as at periungual zones, and it was our intention to compare the different types of lancets to reinforce this idea. No puncture-related complications were reported and, another fundamental aspect not reported in other studies comparing traditional and alternative puncture sites, no differences in blood glucose were observed.
In conclusion, it is our belief that a new type of finger lancet that decreases or eliminates pain associated with lancing merits additional consideration. Further studies are warranted on larger patient cohorts to confirm the present results. If validated, this would enable patients with diabetes - especially those who need to take several daily blood glucose samples - to perform SMBG with greater peace of mind and less distress.
Claimants in medical negligence cases are increasingly relying on negligent failure to warn of risk when claiming compensation after an inherent risk of a medical procedure has manifested itself and resulted in injury. In order to succeed, the claimant must establish firstly that the failure to warn was negligent and secondly that the negligence caused a loss. This paper focuses on causation in failure-to-inform cases, but briefly considers the shift in judicial attitudes to the requirement to give warnings in order to explain how the duty to inform and the available remedies have diverged.
Members of the medical profession commonly believe that, for a negligent failure to inform to have caused a loss to the claimant, a court must be satisfied that the patient would not have consented to the treatment had they been told of the risk. This was probably true until 2004, when the House of Lords came to a surprising decision which has since received a mixed reception.
The Changing nature of the requirement to give warnings
In the early days of medical litigation, whether non-disclosure amounted to negligence was left to the standards of the medical profession. A medical professional was under a duty to at least equal the standards of a reasonably skilled and competent doctor; this would be assumed if s/he had acted in accordance with a body of professional opinion. This is referred to as the Bolam test.[i] There was disquiet amongst academic lawyers that doctors were being allowed to set their own standards, and over time the courts have been wresting back control.[ii],[iii] Following the recent Supreme Court ruling in Montgomery,[iv] there is now no doubt that patient autonomy is paramount and that the need to inform will be judged by reference to a reasonable person in the patient’s position.
In Montgomery the claimant, a diabetic, alleged she had been given negligent advice during her pregnancy. In particular, she was not warned of the risk of shoulder dystocia, the inability of the baby’s shoulders to pass through the pelvis, assessed at 9-10% for diabetic mothers, nor informed of the possibility of delivery by elective caesarean section. The Consultant responsible for her care gave evidence (at paragraph 13) that she would not routinely advise diabetic mothers of this risk because, if mentioned, “most women will actually say, ‘I’d rather have a caesarean section.’” The Supreme Court, in finding for the claimant, held (at paragraph 87): “The doctor is therefore under a duty to take reasonable care to ensure that the patient is aware of any material risks involved in any recommended treatment, and of any reasonable alternative or variant treatments. The test of materiality is whether, in the circumstances of the particular case, a reasonable person in the patient’s position would be likely to attach significance to the risk, or the doctor is or should reasonably be aware that the particular patient would be likely to attach significance to it.” Although expressed as “a duty to take reasonable care”, the medical professional is expected to “ensure” that the patient has the requisite knowledge. The test in failure-to-inform cases now focuses not on the actions of the medical professional but on the patient’s knowledge of the risks.
Chester v Afshar[v]
On 21st November 1994, Mr Afshar carried out a microdiscectomy at three disc levels on Miss Chester. There was no complication during the operation and the surgeon was satisfied that his objectives had been met. When Miss Chester regained consciousness she reported motor and sensory impairment below the level of L2. A laminectomy shortly after midnight the next day found no cause, and the surgeon’s only explanation was contusion of the cauda equina during retraction of the L3 root and of the cauda equina dura during the L2/L3 disc removal. During the legal proceedings Miss Chester brought against Mr Afshar, it was found that the operation carried an unavoidable 1-2% risk of cauda equina syndrome (CES) and that the surgeon had not warned the patient about this risk. It was further found that, had the warning been given, Miss Chester would have sought a second (and possibly third) opinion, meaning that the operation would not have taken place on 21st November.
The surgeon and the patient did not agree on what was said about the risks of the operation before consent was obtained, but the issue was decided in favour of the patient: the surgeon had failed to give a proper warning about the risk of CES. In order to succeed in her claim, Miss Chester needed to establish that this failure had caused her loss, but her lawyers did not argue that she would have refused consent if she had been informed. They took a different approach: the 1-2% risk of CES is not patient specific and is realised at random. If warned of the risk, Miss Chester would have sought a second opinion, meaning that the operation would have happened at a later date and possibly with a different surgeon. This subsequent operation would have carried the same 1-2% risk of CES. The High Court of Australia had previously accepted (in a different case) that the claimant can satisfy the burden by showing that, if informed, s/he would have chosen a different surgeon with a lower risk of adverse outcome, but there was no evidence in this case that by choosing another surgeon Miss Chester could have reduced the risk.[vi]
At the time Mr Afshar failed to advise Miss Chester of the risks, two paths should have been open to her. She could choose to have the operation with the defendant on 21st November, which resulted in CES, or to seek a second opinion and undergo the operation at a later date, giving her a 98-99% (better than balance of probabilities) chance of avoiding CES. Thus the failure to inform did not increase the 1-2% risk of CES, but the court found, as a matter of fact, that it did cause the CES. Although the physical harm that Miss Chester had suffered (because of the inevitable risk) did not fall within the scope of the doctor’s duty to inform (to allow the patient to minimise risk), a majority of the House of Lords felt that the surgeon should be held liable because otherwise the patient would be left without a remedy for the violation of her right to make autonomous decisions about treatment.
There are two leaps in Chester. The first is the notion that negligence causes a loss if it induces the claimant to follow a path with an associated risk that is realised, when they could have followed another path with exactly the same risk. The second is that violating a patient’s right to make autonomous decisions should, as a matter of policy, make the surgeon liable for personal injury which happens after the patient is deprived of their right to make a decision about treatment. The next two sections consider these leaps in turn.
Equally risky paths: The first leap
In Wright[vii] the patient had developed a Streptococcus pyogenes infection that had seeded into her proximal femur, resulting in osteomyelitis. Her admission to hospital was delayed for two days by the defendant clinic’s negligent handling of her first presentation. On admission to hospital the patient had the additional misfortune to receive woefully inadequate treatment, resulting in septic arthritis and permanently restricted mobility. The patient took the questionable decision to sue the clinic but not the hospital. One of the patient’s arguments against the clinic was that, had she been admitted to hospital without the two-day delay, she would have been treated by different staff who would almost certainly not have been negligent. The claimant argued that, as in Chester, although the clinic’s negligence did not increase the random risk of receiving negligent hospital care, it had, as a matter of fact, caused the negligent care. Lord Justice Elias rejected this suggestion precisely because the delay had not increased the risk that the hospital would provide the patient with inadequate treatment. However, the other members of the Court of Appeal found for the patient, but for another reason: given two extra days, the hospital would probably have realised their mistakes and been able to correct them before any permanent harm resulted.
Violated autonomy and personal injury: The second leap
There have been attempts to expand the scope of the majority reasoning in Chester. In Meiklejohn[viii] the patient was treated for suspected non-severe acquired aplastic anaemia with anti-lymphocyte globulin and prednisolone, the latter causing avascular necrosis. At an initial consultation a blood sample was taken from the patient for “research purposes”, but possibly to exclude dyskeratosis congenita, the condition from which he was actually suffering. The patient argued he had not given informed written consent to the taking of a blood sample for research purposes and that, had he been told about the uncertainty in the diagnosis, he would have delayed treatment pending the result of the blood test or asked to be treated with oxymetholone instead. He further argued that these violations of his autonomy required that he be given a remedy for the injury which had actually occurred through a reasonable misdiagnosis of his rare condition. Lady Justice Rafferty, sitting in the Court of Appeal, dismissed this argument, stating at paragraph 34 that, “Reference to [Chester] does not advance the case for the Claimant since I cannot identify within it any decision of principle.”
Conclusion
Courts deciding failure-to-warn cases have shifted the emphasis from the reasonable practices of the medical profession to the autonomy of the patient; from the duty of the medical professional to the rights of the patient. Medical professionals are now required to give enough information to allow a reasonably prudent patient to make an informed decision about their own treatment. While this change has been taking place, there has been no corresponding revision of the remedies available when a patient’s autonomy is infringed. If autonomous decision making is to be properly protected, a remedy should be vested in every patient who has had their autonomy infringed, whether or not that patient has suffered physical injury; autonomy infringements should be actionable per se (without proof of loss) and result in the award of a modest solatium (a small payment representing the loss of the right to make an informed decision about treatment). Under the present arrangements, the wrong that the patient complains of (infringement of autonomy) is not what they are seeking damages for (personal injury).
In a small way, the court in Chester sought to close this gap between the patient’s right and the remedy available by extending the existing law and widening the circumstances in which damages can be recovered by a patient following an infringement of autonomy. Medical professionals who fail to warn patients of small risks may be held liable if disclosing the risk might have caused the patient to delay treatment while further deliberations took place. Paradoxically, it could conceivably be argued that medical professionals who fail to disclose significant risks (greater than 50%) should escape liability because the loss was more likely than not to happen anyway!
Both Chester extensions to the law have been tested independently, in Wright and Meiklejohn, and rejected, but this does not mean that Chester has been overruled. The two subsequent cases were heard by the Court of Appeal, which cannot overrule the House of Lords (now the Supreme Court). Both cases were distinguished, meaning that the court was satisfied that they were not factually the same as Chester. Clearly Wright is not concerned with rights to autonomy, and Meiklejohn concerned a failure to warn of uncertainties in diagnosis, or a failure to obtain written informed consent to research, rather than risks inherent in treatment. If the facts of Chester were to come before the courts again, the decision would have to be the same; a surgeon could not necessarily escape liability by proving that, informed of the risk, the patient would have consented to the operation.
Summary points
Patients have a right to be informed of material risks inherent in medical treatment
An injured patient does not necessarily need to prove they would not have consented to the operation if the risks had been disclosed
A legal claim against a health care professional may be successful if the patient would have delayed the operation to a later date
This extension of the law has critics but the situation is unlikely to change in the near future
Drug abuse is a universal phenomenon and people have always sought mood- or perception-altering substances. Attitudes towards addiction vary depending upon various factors and can range from prohibition and condemnation to tolerance and treatment1. The United Nations Narcotics Bureau describes drug abuse as the worst epidemic in global history2. India, like the rest of the world, has a huge drug problem. Located between two prominent drug-producing hubs of the world, the Golden Triangle (Burma, Laos and Thailand) and the Golden Crescent (Iran, Afghanistan and Pakistan), India acts as a natural transit zone and thus faces a major problem of drug trafficking. Similarly, the geographic location of Jammu and Kashmir is such that the transit of drugs is easily possible across the state. In addition, the prevailing turmoil is claimed to have worsened the drug abuse problem alongside an unusual increase in other psychiatric disorders in Kashmir 3.
There are not many studies of drug use from Kashmir and hardly any about the actual community prevalence. In addition, it is difficult to conduct a study in a community affected by drug abuse because of the stigma associated with drug addiction. Furthermore, people hesitate to volunteer information because of laws prohibiting the sale and purchase of such substances and the risk of being criminally charged. In view of this difficulty, the present study was conducted on treatment-seeking patients at two Drug De-addiction Centres. The present study aimed to highlight the epidemiological profile and pattern of drug use in the Kashmir Valley.
Material and Methods
This cross-sectional study was undertaken at two Drug De-addiction Treatment Centres (Government Psychiatric Disease Hospital and Police Hospital, Srinagar). The Government Psychiatric Disease Hospital is the only psychiatric hospital in the Kashmir valley that also provides treatment for patients with substance use disorders. The de-addiction centre at the Police Hospital is run by the Police Department in the capital city, Srinagar. Both centres have a huge catchment area comprising all districts of the valley, owing to the lack of such services outside the capital city, and thus reflect the community scenario to a considerable extent.
The study was conducted over a period of one year, from July 2010 to June 2011. Patients with substance use disorders were diagnosed as per the Diagnostic and Statistical Manual-IV (DSM IV, 2004) criteria 4. Following informed consent, a total of 125 patients were included in the study. In the case of minors (<18 years of age), consent was obtained from the guardian. Information was collected regarding age, sex, residence, religion, marital status, educational status, history of school dropout, occupation, type of family, reasons for starting the substance of abuse, type of substance abused, and age of initiation. The socio-economic status of the patients was evaluated using the modified Prasad’s scale for the year 2010, based on per capita income per month 5.
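To illustrate how such an income-based classification is applied, the sketch below assigns a per capita monthly income to one of the five Prasad classes. It is a minimal sketch only: the rupee cut-offs shown are illustrative assumptions, not the actual 2010 values used in the study, which are taken from the cited reference and are periodically revised against the consumer price index.

```python
# Illustrative classification of per capita monthly income into the five
# socio-economic classes of the modified B.G. Prasad scale.
# NOTE: the rupee cut-offs below are placeholders, NOT the study's actual
# 2010 values; real cut-offs must be taken from the cited reference.

ASSUMED_CUTOFFS_2010 = [
    ("Class I", 3000),   # >= 3000 Rs per capita per month (assumed)
    ("Class II", 1500),  # 1500-2999 (assumed)
    ("Class III", 900),  # 900-1499 (assumed)
    ("Class IV", 450),   # 450-899 (assumed)
    ("Class V", 0),      # < 450 (assumed)
]

def prasad_class(per_capita_monthly_income: float) -> str:
    """Return the socio-economic class for a given per capita monthly income."""
    for label, lower_bound in ASSUMED_CUTOFFS_2010:
        if per_capita_monthly_income >= lower_bound:
            return label
    return "Class V"

if __name__ == "__main__":
    # Example: a household income of Rs 20,000 shared by 5 members.
    income_per_capita = 20000 / 5
    print(prasad_class(income_per_capita))  # -> "Class I" under the assumed cut-offs
```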
Results
A total of 125 patients with substance use disorders were studied and all were males. The majority of the patients (50.4%) were in the age group of 20-29 years and most (73.6%) were unmarried. Most of the patients were Muslims (96%). The urban to rural ratio was nearly equal. Most of the patients had completed their education up to high school level or higher. There was a high rate of school dropout (41.7%), and among the dropouts substance use was the most common reason (46%) for leaving school. 71.2% belonged to nuclear families. Most of the patients (53.6%) belonged to socio-economic class I as per Prasad’s scale [Table 1]. The majority of the patients started taking substances in the age group of 10-19 years [Table 2]. Besides nicotine (89.6%), the most common substances used were cannabis (48.8%), codeine (48%), propoxyphene (37.6%), alcohol (36.8%) and benzodiazepines (36%) [Table 3].
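For clarity, each percentage quoted above and in the tables below is simply the category count divided by the total sample of 125, rounded to one decimal place; the short Python sketch below, included only as an illustration, reproduces a few of the figures from Table 1.

```python
# Simple check of the percentages reported in Table 1: each figure is the
# count for a category divided by the total sample (n = 125), expressed
# as a percentage and rounded to one decimal place.

TOTAL = 125
counts = {
    "20-29 years": 63,
    "Unmarried": 92,
    "Nuclear family": 89,
    "Socio-economic class I": 67,
}

for category, n in counts.items():
    pct = round(100 * n / TOTAL, 1)
    print(f"{category}: {n}/{TOTAL} = {pct}%")
# 20-29 years: 63/125 = 50.4%
# Unmarried: 92/125 = 73.6%
# Nuclear family: 89/125 = 71.2%
# Socio-economic class I: 67/125 = 53.6%
```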
Table 1: Socio-demographic profile

Characteristic                     N       %
Age (years)
  10 to 19                         20      16.0
  20 to 29                         63      50.4
  30 to 39                         27      21.6
  40 to 49                         12      9.6
  ≥ 50                             3       2.4
Gender
  Male                             125     100.0
Religion
  Islam                            120     96.0
  Sikh                             3       2.4
  Hindu                            2       1.6
Residence
  Urban                            56      44.8
  Rural                            69      55.2
Marital status
  Unmarried                        92      73.6
  Currently married                27      21.6
  Separated/Divorced               6       4.8
Education
  Illiterate                       5       4.0
  ≤ high school                    71      56.8
  > high school                    49      39.2
Occupation
  Unemployed                       21      16.8
  Student                          25      20.0
  Government job                   16      12.8
  Self-employed                    63      50.4
Type of family
  Joint                            36      28.8
  Nuclear                          89      71.2
Socio-economic status
  Class I                          67      53.6
  Class II                         36      28.8
  Class III                        18      14.4
  Class IV                         3       2.4
  Class V                          1       0.8
Table 2: Age at onset of initiation of substance use by patients seeking treatment for Substance Use Disorder

Substance           <10 years       10 to 19 years    >19 years
                    N      %        N      %          N      %
Nicotine            11     9.8      86     76.8       15     13.4
Volatile solvents   0      0        10     76.9       3      23.1
Cannabis            0      0        43     70.5       18     29.5
Codeine             0      0        33     55.0       27     45.0
Propoxyphene        0      0        24     51.1       23     48.9
Benzodiazepines     0      0        20     44.4       25     55.6
Alcohol             0      0        19     41.3       27     58.7
Table 3: Type of substance used by the patients seeking treatment for Substance Use disorder*
Table 4: Reason for starting the substances among patients seeking treatment for Substance Use Disorder*

Reason                                N      %
Peer pressure                         91     72.8
Relief from psychological stress**    49     39.2
Curiosity/experimenting               27     21.6
Fun/pleasure seeking                  13     10.4
Prescription medicine abuse***        12     9.6
Others****                            6      4.8
*multiple responses. **Family tragedy such as death or disease in the family; history of arrests, torture in jail, or death and disability in the family due to the prevailing turmoil; conflicts within the family; loss of job or job dissatisfaction. ***Deliberate use of prescription medications for recreational purposes in order to achieve intoxicating or euphoric psychoactive effects, irrespective of prescription status. ****Family history, routine work or boredom, availability.
Peer pressure was the most common reason (72.8%) for starting substance use [Table 4]. The majority of the patients started using substances in the age group of 10 to 19 years: 76.8% of nicotine users, 76.9% of volatile solvent users and 70.5% of cannabis users began in this age group. The age of onset was higher (>19 years) in the case of benzodiazepines and alcohol.
Discussion:
Kashmir Valley has a population of over 6 million, with around 70% of the people living in rural areas.6
There is almost no data available on the community prevalence of drug use in the valley. The population is predominantly Muslim, with a strong taboo on the use of alcohol and other drugs. Interestingly, none of the patients in our sample were female, which could be due to the stigma associated with drug use and a consequent reluctance to seek treatment. The police de-addiction centre is located in the police lines with heavy security that requires frisking, which may also prevent people, especially women, from seeking help. This does not mean that females do not use drugs, as is evident from clinical practice and previous studies 7. The sample mostly comprised a young age group of 20-29 years (50.4%), followed by 30-39 years (21.6%). Similar findings were reported in a previous study by Kadri et al.8 Another study of college-going male students showed a prevalence of 37.5%9, suggesting a young age at initiation and a high prevalence among students. The results also show a high school dropout rate due to drug use, which may reflect the problems associated with drug use and its negative impact on overall quality of life and future prospects.
There was a slight rural predominance in the sample. This is consistent with the findings of the Drug Abuse Monitoring System India and other studies 10-12, which reveal a nearly equal rural to urban ratio with slight rural predominance. This could be due to the stigma associated with these centres and reluctance among the local population to seek help for fear of being identified and shamed.
73.6% of the patients were unmarried and 4.8% were separated or divorced. Similar results have been shown by Hasin et al 13 and Martins et al 14. The predominance of unmarried patients in our sample could reflect the high proportion of patients younger than the usual age of marriage.
The majority of the patients in our study were using cannabis, medicinal opioids (codeine and propoxyphene), benzodiazepines and alcohol. The high rate of opioid and benzodiazepine abuse in the present study can largely be explained by the over-the-counter sale of these drugs without a doctor’s prescription. This is a worrying trend, as there is no proper drug control and it is easy to access any medication. Although there are only a few outlets selling alcohol in the whole of Kashmir, it is surprising that alcohol use is so common. It is speculated that the current political turmoil may be responsible and that people buy alcohol, legally or illegally, from army depots.
Most of the substance users had started taking drugs at the age of 10 to 19 years, more so in the case of nicotine, volatile substances and cannabis. Similar results have been found in earlier studies.15 Nicotine was typically the first substance of abuse. Tobacco is often considered a gateway to other drugs 16.
The overall prevalence of volatile substance abuse in this study was 10.4%, but it was significantly higher in the adolescent age group (53.8%). About three-fourths of these patients had started using volatile solvents in the age group of 10-19 years. Inhalant use has been identified as the most prevalent form of substance abuse among adolescents in different studies 17-18. The observation in the present study could be explained by the easy accessibility, cheap price, faster onset of action, and regular “high” obtainable with volatile substances such as glues, paint thinners, nail polish removers, dry cleaning fluids, correction fluids, petrol, adhesives, varnishes, deodorants and hair sprays.
Peer pressure was the most common reason for initiation of drug use, followed by self-medication for psychological stress. Previous studies have shown similar results in relation to peer pressure and have also implicated the ongoing conflict situation in the increased drug use in the valley 19-20.
Conclusion:
There is a need for further studies to establish the community prevalence of drug use. Service provision is very limited, restricted to the capital city, with none in the rural areas. There is a worrying trend of an early age of initiation, with adverse consequences including dropping out of school. The control of prescription drug misuse is another major issue which needs to be addressed. It is also worrying that female drug users are unable to seek help due to the lack of appropriate facilities.
Multiple Sclerosis (MS) is an inflammatory, demyelinating disease of the central nervous system (CNS) that affects approximately 400,000 people in the United States and 2.5 million worldwide.1 There are rare variants of this disease that can profoundly delay diagnosis and treatment. Examples of such variants include Tumefactive MS, Acute Disseminated Encephalomyelitis, Neuromyelitis Optica, Marburg’s MS and Balo’s Concentric Sclerosis.2 These variants have a uniquely aggressive presentation and do not exhibit classic MS features.2 Classic MS features include relapsing and remitting sensory and motor impairments, optic neuritis and pain. The aggressive variants are more likely to present with symptoms similar to neoplasm, such as motor impairments and seizures. When dealing with these aggressive MS variants, diagnostic options include Magnetic Resonance Imaging (MRI), Single Photon Emission Computed Tomography (SPECT), MR spectroscopy and Cerebrospinal Fluid (CSF) analysis.3
Invasive tests such as brain biopsy are not warranted unless absolutely necessary. In MS, a biopsy is not required to confirm the diagnosis; to confirm a diagnosis of cancer, however, a biopsy is required.
We present a rare case of Tumefactive MS that exhibited a clinical picture identical to brain metastasis. This was diagnosed with surveillance MRI and CSF analysis in the absence of a brain biopsy.
Case presentation
A 48-year-old African American female was brought in by emergency medical services after falling with a brief loss of consciousness. Associated symptoms included dull chest pain, diaphoresis and shortness of breath. While in the emergency department she also developed nausea, vomiting and dizziness. The patient reported no similar previous episodes and denied precipitating events. The remainder of the review of systems was unremarkable. Past medical history included hypertension with no previous surgeries, and family history included breast cancer in her mother, diagnosed at age 47. The patient denied tobacco, alcohol and intravenous drug use. She noted an allergy to iodine.
On physical examination the patient was afebrile, normotensive and tachycardic, with an oxygen saturation of 89% on room air. She was alert and oriented but pale and diaphoretic, with mild left-sided chest pain. Cardiac examination revealed a regular tachycardia and no murmurs were heard. Her neurological examination showed normal mental status, normal cognition and comprehension, and intact cranial nerves II-XII.
Laboratory findings included haemoglobin of 9.8 g/dL, 30.8% haematocrit and potassium of 3.3 mmol/L. Electrolytes were otherwise normal. Cardiac workup showed a normal ECG and slightly elevated cardiac enzymes of 0.319 ng/mL.
Given the patient’s tachycardia and desaturation, a stat ventilation-perfusion (V/Q) scan was completed (her iodine allergy precluded contrast-enhanced CT angiography). The V/Q scan revealed a perfusion defect suggesting pulmonary embolism (PE) as the cause of symptoms. Subsequently the patient was placed on appropriate anticoagulation.
Head CT (computed tomography) showed a left centrum semiovale round hypodense lesion measuring 1.4 cm, a left basal ganglia round hypodense lesion measuring 1.0 cm and a left occipital lobe round hypodense lesion measuring approximately 1.0 cm (Figure 1). No midline shift was seen. MRI showed multiple hypointense T1/hyperintense T2 nonenhancing lesions, mainly within the left cerebrum (Figure 2 A-F). The three largest lesions within the left posterior centrum semiovale (2A), left globus pallidus (2B) and left posterior corona radiata adjacent to the occipital horn (2C) measured 1.5 cm, 1.0 cm and 1.0 cm respectively. Perilesional vasogenic oedema was seen in all except the basal ganglia lesion. There were bilateral cerebral scattered foci of hyperintense FLAIR/T2 signals (D-F). The imaging suggested a differential diagnosis which included metastasis, infection or primary CNS malignancy.
Further workup in search of possible malignancy was completed. Skin mapping revealed no concerning nevi. Mammogram showed no tumor. CT of the abdomen and pelvis revealed a 2.6 cm indeterminate hypodense lesion in the left lobe of the liver (Figure 3A) along with an enlarged fibroid uterus (17 x 7 x 14 cm). Liver biopsy was considered, but a repeat MRI and ultrasound showed the lesion to be cystic, so this was deferred following surgical oncology recommendations (Figure 3B). For the enlarged uterus found on imaging, gynecology felt no further workup was necessary as they attributed the findings to a fibroid uterus.
Tumor markers CA 27-29, CA 19-9, CA 125 and AFP were all sent and came back negative. An initial lumbar puncture with CSF analysis was not performed because of the risk of complications while the patient was on the anticoagulation necessary for the PE.
Due to a non-focal neurological examination, she was discharged on Levetiracetam 500 mg for seizure prophylaxis and Dexamethasone 4 mg for perilesional oedema. Over subsequent months the patient did well without headaches, vision changes or seizure like activity. On subsequent visits to the clinic, she had no evidence of focal neurological deficits except for mild bilateral symmetric hyperreflexia. Given that the metastatic work up remained negative, we considered obtaining a baseline Positron emission tomography (PET) scan to ensure we were not missing any possible metastasis.
She subsequently went back to work full-time and reported no symptoms. Repeat MRI of the head (Figure 4 A-C) showed predominantly T1 hypointense and T2 hyperintense (A-B) lesions with a significant decrease in size compared with the MRI done three months earlier. These lesions demonstrated no enhancement to incomplete ring enhancement, with diminished vasogenic oedema (A). These findings suggested an inflammatory demyelinating process, so a lumbar puncture was obtained after anticoagulation was held. CSF analysis was done using isoelectric focusing (IEF) and immunoblotting methodology. This revealed a normal myelin basic protein but eight oligoclonal bands restricted to the CSF. These findings solidified the suspicion of Tumefactive MS.
Figure 1. Head CT without contrast: left centrum semiovale round hypodense lesion measuring 1.4 cm with perilesional vasogenic edema.
Figure 2. MRI of brain showing axial T1-weighted (A-C) hypointense lesions of the left centrum semiovale (A), left basal ganglia (B) and left occipital lobe (C). Axial T2-weighted (D-F) views show multiple hyperintense lesions corresponding to the same locations. Perilesional vasogenic edema is seen.
Figure 3A. Thorax CT without contrast. 2.6 cm left lobe liver lesion.
Figure 3B. MRI of abdomen showing coronal T2-weighted half-Fourier acquisition single-shot turbo spin-echo (HASTE) hyperintense lesion. A mildly enlarged liver measuring 18.7 cm in craniocaudal span. Simple 2.8 x 2.4 cm cyst in the medial segment of the left lobe.
Figure 4. MRI of brain (3 months after initial scans) showing axial T2-weighted (A-B) hyperintense lesions of the left centrum semiovale (A) and left basal ganglia (B). There is irregular peripheral enhancement. A considerable decrease in size is seen from the previous MRI (Figure 2). The left posterior centrum semiovale, left globus pallidus and left occipital lobe lesions measure 1.3 cm, <1 cm and <1 cm respectively. Vasogenic edema is diminished in comparison to the previous study.
Discussion
Tumefactive MS lesions are defined as solitary demyelinating plaques greater than 2 cm.5 Such lesions are difficult to distinguish from primary or metastatic tumors given the similarity of imaging features.5 Imaging features suggestive of Tumefactive MS include incomplete ring enhancement, absence of mass effect and absence of cortical involvement.6,7 Kim describes CT hypoattenuation of magnetic resonance enhancing lesions as highly specific for distinguishing Tumefactive MS lesions from CNS cancer pathology.6 It has been shown that SPECT using I-IMP is useful for diagnosing CNS malignancy, because there is increased uptake in comparison with MS lesions, implying increased metabolic activity.3 However, this approach has its limitations: in a few isolated cases I-IMP was found in greater quantities in MS tumor-like lesions.3
The imaging studies for this patient raised concern for metastasis, infection or primary malignancy. Extensive cancer workup was completed, as previously discussed. Since all tumor markers were negative, a baseline PET scan was considered; however, it was not done owing to insurance denial. Due to the asymptomatic presentation of her disease, a primary differential diagnosis of brain metastasis and anticoagulation therapy for PE, a CSF analysis was not considered until much later. We were able to use surveillance MRI and CSF analysis to see some resolution of the lesions and confirm the diagnosis. Brain biopsy was never warranted in this case, although in symptomatic cases it may be.6
The cornerstone of diagnosing MS is the demonstration of dissemination of lesions in both time and space, as set out in the McDonald Criteria.8 The revised criteria allow a diagnosis of MS, “possible MS” or “not MS”.8 This is what made the diagnosis in our patient difficult, as no clinical symptoms or attacks were evident. Over the course of three months the lesions seen on MRI evolved: from 1.5 cm, 1.0 cm and 1.0 cm they became 1.3 cm, <1.0 cm and <1.0 cm respectively (Figure 2, Figure 4). This was likely the effect of the steroids that the patient was taking for her vasogenic oedema. An evolution in time and space was thereby demonstrated, which excluded brain metastasis and infection. This brings into discussion the diagnostic value of surveillance MRI, which in our case was helpful and appropriate as the patient did not have clinical symptoms.
Conclusion
The diagnosis of Tumefactive MS can be extremely difficult and time-consuming. As seen in our case, it can mimic other conditions. Our patient was diagnosed using MRI surveillance and CSF analysis. The definitive diagnostic test for MS is a brain biopsy, but this is not preferred owing to the invasiveness of the procedure. With the advent of newer diagnostic tests such as SPECT, MR spectroscopy, surveillance MRI and CSF analysis, a diagnosis can be reached and the condition treated presumptively.
A clearer understanding of the aetio-pathogenesis of schizophrenia would ultimately lead to effective treatment strategies. The autoimmune hypothesis holds that auto-antibodies are responsible for schizophrenia; according to the viral hypothesis, it may be the body’s abnormal response to a slow viral infection, or the undefeated viral antigens, that produces the schizophrenia pathology. The autoimmune and viral hypotheses are interlinked, as autoimmune disorders can be triggered by microbial infection. Viral aetiology is less convincing than the autoimmune model, but from a treatment perspective the former is more promising than the latter. To gain a detailed understanding of aetiological models of a subset of schizophrenia, the author here reviews the literature relating to the immunity- and viral-based aetiological models of schizophrenia. Genetic vulnerability has been highlighted in the schizophrenia literature alongside environmental factors. The veracity and contestability of the immunity- and viral-based aetiological hypotheses of schizophrenia merit further investigation.
Schizophrenic Syndromes
A prerequisite for incorporating autoimmune and viral aetiology into a scientific discussion would be acceptance of the heterogeneous hypothesis of schizophrenias; they may be a cluster of entities with different aetiologies and the end-stage of different disease processes. 1 Autoimmune or viral aetiology may account for one subgroup.
Schizophrenia has diverse signs and symptoms, and a long history of controversy over its aetiology, psychopathology, nomenclature and diagnostic criteria. Nosologists designate it as polythetic, whereas most other mental illnesses are monothetic, seemingly affecting only one brain system. 2 In the second half of the twentieth century, the psychosocial model gave way to evidence that it is a brain disorder. Schizophrenia is currently seen either as a neurodevelopmental encephalopathy, in which the cognitive deficits are produced by errors during the normal development of the brain, 3 or as a neurodegenerative disorder, in which the cognitive deficits derive from a degenerative process that proceeds unalterably. Modern neuroimaging techniques and an intensification of studies of necropsy tissue have been responsible for this shift. Researchers seem to agree that a neurodevelopmental or degenerative assault precedes the symptoms by several decades.
The aetiology of the cognitive deficits is unidentified and several potential factors, genetic and epigenetic, are envisaged. Environmental factors—including infectious agents and disturbance in utero through malnutrition—account for a few cases. Autoimmunity and viral theories would fit in with the neuro-developmental and neurodegenerative hypotheses. Proponents of viral aetiology view viruses as acting alongside susceptible genes to initiate a trajectory that manifests as psychotic symptoms.
Lessons from Autoimmunity
Disorders of an autoimmune nature are known to occur with increased frequency in patients who already have another autoimmune disease. This is somewhat like the coexistence of multiple psychosomatic disorders in a person; as per Halliday’s psychosomatic formula, the association of other psychosomatic afflictions justifies the diagnosis of a new psychosomatic condition. 4 It is well recognised that the central nervous system (CNS) may be directly affected by autoimmune processes, as in the case of multiple sclerosis (MS) and autoimmune limbic encephalitis. Physical autoimmune diseases, such as systemic lupus erythematosus (SLE) and antiphospholipid syndrome, are also associated with psychiatric morbidity. Paediatric autoimmune neuropsychiatric disorder is a post-infection (group A beta-haemolytic streptococcal infection) autoimmune disorder characterised by the abrupt onset of obsessive compulsive disorder (OCD) and Tourette’s syndrome, brought about by molecular mimicry. 5 Nicholson et al observed that 20% of OCD patients were positive for anti-basal ganglia antibodies, considered to be part of a post-streptococcal autoimmune reaction. 6
Autoimmunity is a misdirected response occurring when the immune system attacks the body; it is the loss of tolerance to self-antigens. Immunological tolerance to one’s own tissue is probably acquired during foetal life, helping to prevent the occurrence of the autoimmune process (see Table 1). Some clones of cells that can produce auto-antibodies (forbidden clones) are thought to be produced throughout life, and are suppressed by large amounts of self-antigens or by antigen-specific T cells. Auto-antibodies are produced against a wide variety of antigens; some are organ-specific and others are non-organ-specific. Some microorganisms or drugs may trigger changes in individuals who are genetically vulnerable to autoimmunity.
Table 1- Mechanisms preventing and causing autoimmunity
Tolerance to self molecules
a. Clonal deletion: removal of any lymphocytes that might react to self molecules.
b. Clonal anergy: decreased responsiveness of lymphocytes that recognise self molecules.
c. Receptor editing: rearrangement of B-cell receptors.
d. Reduction or inhibition of molecules or antigens that may cause self recognition.
Failure of self tolerance
a. Release of isolated auto-antigens: tissue trauma or infection may cause breakdown of anatomical barriers and expose hidden antigens for recognition by T cells that were not deleted during development.
b. Structural alterations in self peptides: once structurally altered by a trigger such as infection, self peptides become more antigenic and are subsequently recognised by the undeleted T cells, evoking an immune response.
c. Molecular mimicry: based on a structural similarity between a pathogen or metabolite and self structures, evoking an immune response against the foreign particles but also an autoimmune response against the self molecules they resemble.
d. Polyclonal activation: infectious agents activate the immune system; B cells and T cells are stimulated, resulting in abnormal production of immunoglobulin specific for self molecules.
e. Genetic predisposition.
A human disease may be considered to be of autoimmune origin on the basis of knowledge from molecular biology and hybridoma technology, 7 along with the Witebsky postulates. Autoimmune origin is established by the presence of auto-antibodies and T cells that react with host antigens. Approximately 25% of patients with an autoimmune disease (AD) tend to develop additional auto-antibodies. Strausburg et al (1996) set out several hypotheses for the virally-triggered autoimmune mechanism (see Table 2). 8 Allergy is the consequence of a strong response to a harmless substance, whereas ADs are caused when the destructive potential of the immune system is misdirected against the self. ADs share common effector mechanisms with hypersensitivity reactions and can be classified into three main types corresponding to the type II, type III and type IV categories of hypersensitivity reactions (see Table 3).
Table 2 - Virally triggered autoimmune mechanisms
a. Molecular mimicry: a protein or polysaccharide on the virus may be structurally homologous to a host molecule; the immune system, unable to differentiate between the two, may then cross-react with host cells and tissues expressing this molecule.
b. The virus may cause the release into the circulation of auto-antigens that are normally hidden from the immune system.
c. The virus might pick up host proteins from cell membranes, which become immunogenic since they are present on the virus particle.
d. The virus, in the process of replication, may structurally change host proteins, which in turn become recognised as foreign by the immune system.
Table 3 - Classification of Autoimmune disorders
Type I: no autoimmune diseases are caused by IgE, the source of type I hypersensitivity reactions.
Type II: caused by antibodies directed against components of cell surfaces or the extracellular matrix.
Type III: caused by soluble immune complexes deposited in tissues.
Type IV: caused by effector T cells.
Shared Aetiology
ADs are characterised by shared threads in terms of their propensity to co-exist in a patient or in direct relatives. Two major autoimmune clusters have been recognised, namely the thyrogastric (mostly organ-specific) diseases and the lupus-associated (mainly multisystem) diseases. 9 Some ADs are distributed within either cluster and there are also overlaps within each cluster. These patterns of concurrence depend predominantly on genetic determinants.
Poly-autoimmunity is the term proposed for the association of multiple autoimmune disorders in a single patient, and such co-occurrences indicate a common origin of the diseases. 10 Adriana et al, by grouping diverse ADs in the same patient, demonstrated that they are true associations forming part of the autoimmune tautology rather than chance findings.
Co-Occurrence of ADs
Theories for autoimmune aspects of schizophrenia raise the concept of early infection by microorganisms with antigens so analogous to CNS tissue that the resulting antibodies act against the brain. Some data suggest that an autoimmune process precedes schizophrenia, non-affective psychosis, and bipolar disorder, 11 but do not establish whether this process is triggered by viral attack, as viral footprints may be hard to detect, especially in the target organ, once the autoimmune process has begun. Psychosis is reported in 25% of SLE cases.
A Danish study revealed that schizophrenia is associated with a large range of ADs. 12 The researchers found that a history of any AD in the patient is associated with a 45% increase in the incidence of schizophrenia. Specifically, nine ADs have a higher prevalence rate among patients and 12 ADs have a higher prevalence rate among their parents than among comparison groups. In comparison with the control group, thyrotoxicosis, celiac disease, acquired haemolytic anaemia, interstitial cystitis and Sjogren’s syndrome had a higher prevalence rate among schizophrenia sufferers and their family members.
Three of the ADs, namely celiac disease, thyrotoxicosis and acquired haemolytic anaemia, have previously been associated with schizophrenia. Celiac disease involves an immune reaction to wheat gluten. The association could be due to increased permeability of the intestine raising the level of antigen exposure and so the risk of an autoimmune response to brain components, or it may be that gluten proteins are broken down into psychoactive peptides. Eaton et al opined that the association of schizophrenia and ADs could be due to common genetic causes, perhaps related to the HLA or other genes, and that some cases of schizophrenia may be a consequence of the production of autoantibodies that disrupt brain function.
Researchers for a Taiwan study identified a greater variety of ADs in schizophrenic patients than anticipated and recommended further research. 13 Chen et al. found that 15 ADs are significantly associated with the schizophrenia group. Their studies also confirmed an earlier observation of a negative relationship between schizophrenia and rheumatoid arthritis (RA). It has been observed in a small sample study that mothers of schizophrenia patients have a lower risk for RA.14
Rheumatoid Connection
The negative correlation between schizophrenia and RA is puzzling. 15 Such dissociation was interpreted as the effect of antipsychotic medication. Similarly, the metabolic changes associated with one disease may inhibit another.16 Genes predisposing a person to have one disorder may have a protective influence against another and, in that way, the negative rheumatoid connection with schizophrenia is consistent with an autoimmune model.
RA has a genetic predisposition, partly mediated by major histocompatibility complex (MHC) alleles, and is triggered by infection. Similarly, schizophrenia has genetic and environmental associations and has been cautiously connected with MHC genes other than those perhaps involved in RA. In addition to gene products responsible for antigen presentation, the MHC gene complex holds a multitude of genes controlling aspects of the immune response. Hypothetically, depending on the set of genes an individual has inherited at the MHC complex, a viral assault will lead the immune system either into an immune cascade toward the development of RA, or along a genetically-predetermined path in which a network of cytokines and immune mediators is directed against CNS components, resulting in schizophrenia. 17
The negative rheumatoid connection may be attributable to two mutually-exclusive alleles of the same gene. Such associations may lead to novel treatment strategies; carriers of the sickle cell trait, for example, are thought to be less affected by malaria. Of note, combined research from the Karolinska Institute in Sweden and Johns Hopkins University School of Medicine in the United States has recently indicated that particular genes, and the specific deoxyribonucleic acid (DNA) sequences that regulate them, act together in the progression of RA; rheumatology may be inching closer to an early detection method and effective treatments. A comparable development could hopefully occur in schizophrenia research.
Commonalities
Even though ADs superficially seem different, the vast majority of them share several similarities. Like ADs, schizophrenia, as such, is neither infectious nor congenital. Schizophrenia and ADs have well-established genetic propensities, and a combination of genes, rather than a single gene, is thought to be responsible for their manifestations. Both schizophrenia and ADs can be triggered by environmental toxins and they have a remitting and relapsing course. Worsening of symptoms is observed when patients are exposed to stress and both conditions have a peak increase in late adolescence or early adulthood. These similarities argue in favour of an autoimmune aetiological model of schizophrenia. 18
There is, however, an interesting epidemiological dissimilarity between ADs and schizophrenia. The incidence of ADs is increasing in developed countries, whereas schizophrenia has a consistent prevalence of about 1% globally. According to the hygiene hypothesis of ADs, the widespread practice of hygiene, vaccination and antibiotic therapy in rich countries has left children’s immune systems less able to deal with genuine infections and more prone to attack the body’s own tissues in highly destructive ways. 19 The incidence in the two sexes was thought to be almost equal in the case of schizophrenia, but a recent study shows that for every three males with schizophrenia there are two females with the disease. 20 ADs are slightly more common among females.
Immune Modulation of Clozapine
Antipsychotics may have an immunosuppressant effect; plasma levels of IL-6, soluble IL-6R and transferrin-receptor (TfR) were significantly lower after antipsychotic drug treatment. Activation of cell-mediated immunity may occur in schizophrenia; neuroleptic agents may modulate this through suppression of IL-6 or IL-6R-related mechanisms. 21 The antipsychotic effect may involve a counter-effect on the brain-mediated immune system.
Clozapine, the gold standard for refractory schizophrenia, is a dibenzodiazepine with low D2 receptor occupancy and is also a 5-hydroxytryptamine antagonist. Studies indicate that, among the atypical antipsychotics, clozapine seems to have an immunosuppressant effect along with its neuro-modulatory effect. It has been suggested that clozapine may diminish antibody synthesis in hematopoietic cells, and it has also been argued that a possible immunosuppressive action may contribute to its superior antipsychotic efficacy. 22 The long-term immunosuppressive effects of antipsychotics may inhibit putative autoimmune responses against neurological sites and could thus act synergistically with the direct antagonistic action on brain receptors to produce the evident improvement of psychotic symptoms. 23 It is also conjectured that the increase of soluble IL-2 receptor levels in clozapine-treated patients indicates an immunosuppressant mechanism. 24
Haloperidol may also be a neuro-immune-modulating drug. A study of in-vitro effects of clozapine and haloperidol on cytokine production by human whole blood suggested that both drugs, at concentrations within their therapeutic range, may exert immunosuppressive effects through an enhanced production of IL-1 receptor antagonists. 25
It is well recognised that, unlike other antipsychotics, clozapine works better over time, perhaps because immune modulation takes longer than neuromodulation. In addition to neuromodulation, antipsychotics may also be working on the principle of immune modulation. If a derivative of clozapine without its haematological and metabolic side effects were discovered, such a drug would become the first-line choice among the antipsychotics, and that could be a significant event in schizophrenia research. The immunosuppressant effect of clozapine is reflected in the public health advice that patients on clozapine should have the winter flu jab. Elderly patients on antipsychotic medications are more prone to pulmonary infections, indicating that such drugs have a subtle immunosuppressant property.
Autoimmune-Neuropsychiatric Disorder?
If schizophrenia is an AD, a higher rate of other ADs may be expected among schizophrenia sufferers. Most studies confirm that the illness is tied to irregularities affecting multiple levels of the immune axis. There are multiple interlinked causative factors in the aetiology of schizophrenia. There are suggestions that the neuro-behavioural changes follow an abnormal response to microbial invasion, but that does not necessarily imply an autoimmune process. The literature deciphering the role of viruses in neurotransmitter abnormalities linking neurodevelopmental assaults and the neuropsychological manifestations of schizophrenia is unhelpful. For those who adhere to the autoimmune model of schizophrenia, the simplest suggestion would be that the pathogenesis of the subset of schizophrenia studied is caused by antibodies in the plasma and CSF that react with brain proteins, resulting in a neuro-autoimmune process.
Lessons from Viral Infections
The concept that certain psychiatric disorders are the neuro-behavioural sequelae of the body’s immune response to viral infections was prevalent in the early part of the 20th century. It was an outcome of research conducted into rabies in the late 1880s, which revealed the affinity of viruses for the nervous system. Research into tertiary syphilis also provided evidence of an infectious aetiology for specific psychiatric disorders. Investigation of the encephalitis lethargica pandemic (1919-1928) contributed to recognition of viral causation on account of the similarities apparent between the psychotic symptoms associated with encephalitis lethargica and the clinical presentation of schizophrenia. 26
Post-influenza depression, depression following mononucleosis, and hallucination associated with herpes encephalitis are well recognised. Menninger, who studied post-influenza psychosis, promulgated the first acceptable viral hypothesis for schizophrenia. 27 In the mid-twentieth century, psychodynamic studies began to encompass the origins of schizophrenia and viral aetiology lost its novelty. Dementias associated with Acquired Immune Deficiency Syndrome (AIDS) have reawakened interest in the correlation between virology and psychiatric disorders, and different authors have revisited these hypotheses in the last three decades. 28-36
The immune response to influenza and other viruses involves cell-mediated immunity and cytokine activity, which tend to divert tryptophan towards kynurenine rather than serotonin. The outcome of this deviation is mood disturbance. It is the body’s immune response that blocks the conversion of tryptophan into serotonin, thereby resulting in post-influenza depression. It is arguable that there may be other psychiatric disorders consequent on a slow immune response of the body to viral infections. The possibility of viral oncogenesis was originally ridiculed, but there is now some evidence to support the view that viruses are responsible, at some stage, for approximately 20% of human malignant diseases. 31
In theory, a virus could induce schizophrenic symptoms or depression by stimulating antibodies that cross-react with brain tissue, without necessarily gaining entry into the brain. At different developmental stages, the immune response may become less efficient and viral agents may become potentiated, leading to neuropsychiatric conditions. The supposedly inflammation-mediated brain diseases occur at different stages; for instance, schizophrenia in late adolescence or early adulthood, and Alzheimer’s disease typically at an advanced age. It is well established that the human immunodeficiency virus (HIV) may lead to a form of AIDS dementia, and other common viruses that infiltrate neurons may cause other types of dementia. HIV/AIDS and Borna Disease Virus (BDV) in animals help to bring the infection-based model of schizophrenia into the realm of scientific imagination.
Viruses can influence the human genome. After infection, viral sequences may become integrated into the genome of brain cells. These sequences are not thought to be inheritable, but may cause mutations that interfere with brain functions and contribute to the development of psychiatric disorders. 37 It may be argued that the combination of the body’s sustained immune response and the constant release of antigens of a hypothetical slow virus (a “schizovirus”) could account for the neuro-behavioural alterations. In the following paragraphs, the author discusses how viral pathogens and other potential contributors could interact and lead to schizophrenic psychopathology.
Immune Responses
Neuro-developmental theories of schizophrenia fit the hypothesis that viral insult occurs early in sufferers, not proximally to a psychotic episode. The interaction between host and virus is affected by the coordinated activity of the immune system and the brain. There is evidence that schizophrenia is accompanied by alterations in the immune system. Innate immunity is the first defence against microbes; infection results in invasion by live microorganisms and their toxic products, stimulating an inflammatory response. Neuronal functions are disrupted by pathogens and by the brain’s inflammatory responses. Non-cytolytic viruses may affect neurones without causing cyto-architectural alteration, instead disturbing neurotransmitter production and weakening hormones involved in neurodevelopment. 38 In schizophrenia, immune infiltration is absent, viral inclusion bodies are lacking and gliosis is minimal. There is subtle disruption of neuronal function and brain development, but no significant loss of neuronal cells. Thus, the schizophrenia subset may have a viral aetiological origin bringing about anomalous specific immune responses, an autoimmune basis, or both. What triggers the autoimmune process is uncertain, but microbial triggers are a strong possibility.
Immune dysfunctions, including lymphocytic abnormalities, protein abnormalities, auto-antibodies and cytokine changes, have been reported in seriously ill patients 39. One study showed significantly higher plasma levels of interleukin-6 (IL-6) in schizophrenia, while soluble IL-6R and soluble IL-2R were significantly raised in mania. 40 A few early investigators claimed to have microscopically visualised virus-like particles in the cerebrospinal fluid (CSF) of patients or in chicken embryos inoculated with CSF. Studies of viral antibodies, viral antigens, viral genomes, the cytopathic effect of specimens on cell cultures, and animal transmission experiments are other avenues for exploring the viral infection hypothesis.
The subset of schizophrenics in question may have a highly-sensitive surveillance system, but a less-discerning immune mechanism than the general population. It could be the over-reaction of the immune system to the microbial adversary that may eventually lead to the schizophrenia pathogenesis. The fault may lie in the surveillance system, as well as in the body’s anomalous response to the microbial invasion. 17 In general, innate and acquired immune mechanisms interact and cooperate, but any derangement can lead to deviant immune responses that may result in neuropsychiatric abnormalities.
From an evolutionary perspective, innate immunity is less evolved, and the mammalian brain is endowed with a complex immune response system, implying that the neurobehavioural aberrations of schizophrenia could be more closely linked with deviant and vigorous specific immune responses. 17 It is possible that the proposed subset of schizophrenia may have either an autoimmune basis or a viral aetiological origin bringing about anomalous specific immune responses, or both. It has been argued that a gene family involved in the specific immune system and autoimmunity is involved in schizophrenia. 41 Genome-wide association studies (GWAS) have been disappointing in schizophrenia, whereas the major histocompatibility complex (MHC) region continues to be the best-replicated finding.
Epidemiological Findings
Epidemiological studies, characterised by certain broad patterns of incidence and distribution of schizophrenia, offer useful supporting evidence for a viral aetiology (see Table 4). In a study of adults at risk of exposure in utero to the 1957 influenza A2 epidemic in Helsinki, those at risk during the second trimester had significantly more hospitalisations for schizophrenia than those potentially exposed during the other trimesters or in the surrounding years. 42 Nine subsequent epidemiological studies scrutinised the risk of schizophrenia after possible intrauterine exposure to influenza in Europe and the USA; a small majority claimed to find an association. 43 Falsifying the influenza link with the origin of schizophrenia does not altogether make the viral aetiology null and void; there could still be an unknown virus (a “schizo-virus”) as the causative agent. The hepatitis C virus came to medical attention only 15 years ago. At the least, these epidemiological studies illustrate that viruses can help set the stage for schizophrenia as a long-term sequel.
Table 4 - Suggested evidence for viral aetiology
A. Direct evidence: 1. Neuropathology 2. Transmission to laboratory animals 3. Detection of viral genome 4. Sero-epidemiological studies (detection of antigen or antibody)
B. Indirect evidence: 1. Seasonality of schizophrenic births 2. Prevalence studies 3. Immune alterations 4. Antiviral effects of antipsychotic drugs 5. Possible immunosuppressant effect of antipsychotic drugs 6. Studies of identical twins 7. Migration and high risk 8. Gender differences (males are younger at disease onset and have a more severe course)
A worldwide average of 1% prevalence of “core schizophrenia” is generally accepted, 44 even though such a concept of universal distribution and gender equality has opposition. 45 However, there is evidence to assume that there may not be gross variations in this global prevalence. Cross-culturally stable rates, despite decreased fecundity in affected individuals, support an external biological aetiology. These point toward biologically-interlinked and multifactorial causation including an evolutionary genetic factor, as a single biological factor would be insufficient. The preservation of susceptibility genes for schizophrenia in the human gene pool is an evolutionary enigma; gene carriers or first-degree relatives may have some compensatory evolutionary advantage. 46 In a multifactorial aetiological model of schizophrenia, infectious theories are contestable. 17
Such a consistent prevalence, if true, could also be argued in favour of a biologically-inter-linked and multi-factorial causation of schizophrenia, as it is obvious that a single biological factor would be insufficient to maintain a delicate and consistent global prevalence of a disease. Many viruses are relatively constantly distributed, while genetic diseases present distinct geographical clustering due to inbreeding. One may hypothesise that where viral loading is high, genetic input may be less and vice versa. The consistent global incidence points toward universal microbes, a readily-available environmental factor, or, more specifically, a “schizovirus.” The interaction of vulnerable host genes with a virus could yield epidemiology like that of schizophrenia.
Birth patterns rank highly among epidemiological observations in schizophrenia. 47 Many more schizophrenia sufferers are born in winter and spring than in summer and fall. 48 Infectious aetiology is a plausible explanation, as many viruses show a surge in the same months, and viral aetiology is a convincing explanation of the consistency of this pattern. While genes coding for particular proteins are inherited, environmental and developmental factors are undoubtedly implicated in modulating gene expression.
Exposure to prenatal infections and other obstetric complications are neuro-developmental assaults that increase vulnerability to schizophrenia. 49-52 In obstetrics, infection in the mother generates antibodies transmitted to the foetus, producing auto-antibodies that upset neural development and increase the schizophrenia risk. 53
Schizo-Virus or any Microbe?
It is not certain whether it is the body’s abnormal response to any virus or other microbe, or a specific unknown virus, that results in “schizophrenic reactions.” It is also unclear whether the undefeated antigens of this hypothetical virus alone are capable of inducing the neuro-behavioural changes associated with schizophrenia. The hepatitis C virus came to medical attention only 15 years ago, the rotavirus was isolated in 1973 and HIV was isolated in 1983; non-detection of a pathogen does not exclude its role in pathogenesis. If a specific virus is responsible for schizophrenia, it should have been with human society for a very long time, as the illness has been reported from the beginning of recorded human history. Some people may have a genetic vulnerability to the hypothetical schizovirus; heritability would then lie in susceptibility to the specific virus. Poliomyelitis has a concordance rate of 36% among monozygotic twins, with the remainder attributed to environmental factors. The majority of children exposed to the polio virus do not develop poliomyelitis, and a genetic propensity may be required for the viral manifestation. It has even been reported that 10% of the world population rarely catch influenza, in spite of its yearly mutation.
Cardiac disease following acute rheumatic fever, in which endocarditis is caused by an autoimmune process affecting many parts of the body, is an analogy that demonstrates how a microbial infection might, in a different scenario, lead to impaired neurodevelopment and psychiatric disorders. The endocarditis is triggered by an immune reaction to streptococcal bacteria rather than by direct bacterial infection, and it may begin a chronic process leading to valvular cardiac disease. Generally, rheumatic heart disease is diagnosed 10-20 years after rheumatic fever. Similarly, schizophrenia could be an autoimmune complication of a subtle microbial infection; finding and countering the antigenic triggers of ADs may lead to an effective cure.
HIV/AIDS
Patients with HIV are at risk for developing psychiatric symptoms and disorders similar to those seen in the general population, as well as those that are direct effects of HIV. HIV is a neurotropic and lymphotropic virus that causes immune suppression and allows the entry of opportunistic pathogens with an affinity for the CNS. There is some evidence that HIV may trigger a psychotic episode and contribute to first-onset schizophrenia. 54 Serious CNS complications occur late in the course of HIV infection, when the immunity function has diminished considerably. The viral load is closely associated with the degree of cognitive impairment. HIV-associated dementia (AIDS dementia complex) is defined as acquired cognitive abnormality in two or more domains and is associated with functional impairment and acquired motor or behavioural abnormality in the absence of other aetiology. It is estimated that 30% to 60% of patients experience some CNS complications during the course of their illness and 90% reveal neuropathological abnormalities at autopsy.
Pearce argued that HIV-related encephalitis could engender a scenario for a viral aetiology of schizophrenia. 17 HIV produces symptoms after being latent for several years. HIV was not identified as the aetiological agent of AIDS until the conditions for viral replication in lymphoid cell lines were identified. Prior to the development of PCR and serological techniques, it was debatable whether the virus was in circulation at all. This indicates that the absence of a demonstrable virus does not mean the absence of a subtle virus-induced disease process; no virus, as such, is currently detectable in the schizophrenia disease process. Even in the absence of opportunistic infections, HIV infection of the brain causes severe neuro-behavioural syndromes, such as AIDS dementia, without infecting neurons, but by complex interaction with host molecules and non-neuronal cells. All this suggests that if a rare or unknown infectious agent were involved in schizophrenia, it would not be identified unless it was specifically tested for.
The finding that the neurophysiological and psychological stress of HIV infection can aggravate an underlying psychotic illness implies that viruses, without being a direct causative agent in psychotic episodes, can unmask pre-existing psychiatric vulnerabilities, acting on the brain physiology through unknown pathways. A curious aspect of HIV-related psychosis is that it responds to anti-psychotic treatment and to anti-retroviral drugs. Several anti-psychotic drugs have been shown to have antiviral properties, both in vitro 55 and in vivo. 56 The deduction is that a virus could initiate events resulting in psychosis, and anti-psychotic drugs can interrupt that sequence. All these features of HIV infections are consistent with the idea that a virus can cause neurobehavioral abnormalities after several years.
Borna Disease Virus
It has been recognised that Borna disease virus (BDV) can cause neuropsychiatric complications, including neurological, behavioural and mood alterations, in animals. 57 A ribonucleic acid (RNA) virus of the family Bornaviridae, it is a neurotropic virus with an affinity for a variety of hosts, particularly hoofed animals, and can cause persistent infection of the CNS. Such an infection may be either latent or chronic and slow, but BDV presents with the latent type, characterised by a lack of viral particles. It may therefore resemble the alleged pathogens in non-affective psychosis.
Depending on the host’s age and the integrity of the immune response, an infection may be asymptomatic or involve a broad spectrum of behavioural disorders. The severity of clinical symptoms depends on the immune response of the host. 58, 59 Unusual features of BDV biology include nuclear localisation of replication and transcription, varied strategies for the regulation of gene expression, and interaction with signalling pathways, resulting in subtle neuropathology. 60 BDV can directly influence the CNS through the binding of viral proteins to neurotransmitter receptors, and indirectly through the immune response and inflammatory reactions. The issue of human BDV infection has recently been questioned by American researchers, who reported an absence of association of psychiatric illness with antibodies to BDV or with BDV nucleic acids in serially-collected serum and white blood cell samples from 396 participants. 61 However, BDV in animals helps to bring the infection-based model of schizophrenia into the realm of the scientific imagination.
Neurotransmitters
It is an overstatement to say that schizophrenia is a neurotransmitter disease, although it is well established that it incorporates a derangement of dopamine activity, and some viruses have been shown to alter dopamine metabolism.62,65 The literature deciphering how viruses bring about the neurotransmitter abnormalities linking neurodevelopmental insults to the neuropsychological manifestations of schizophrenia is unhelpful.63 It has been reported that in rodents BDV can disrupt neurotransmitter systems, including dopamine, neuropeptides and glutamate.64 How viruses alter neurotransmitters is a central issue. Communication between the immune system and the brain is crucial for defence against viral infection, and this communication is mediated through neurotransmitters; viruses are bound to tamper with this intrinsic communication system as part of their cellular offensive.
Genetics
The undisputed genetic factor in schizophrenia might be thought to discount the viral hypothesis. However, genetic factors do not exclude environmental contributions: monozygotic twins have a concordance rate of only 48%. Brief reactive psychoses occurring as acute sequelae of viral infection, though regarded as unrelated to schizophrenia, may still be schizophrenic reactions that fail to progress to schizophrenia only because the sufferers are not genetically predisposed to it. Genetic predisposition may be attributable to genes that determine idiosyncratic differences in immune responsiveness to common viral pathogens.
Susceptibility and the immune response to infectious agents are known to be subject to genetic control and may involve multiple interacting susceptibility genes.66 The genetic component of schizophrenia may likewise involve multiple interacting susceptibility genes; these, together or singly, may moderate the virus, and the virus and gene products may act at different points. Many cases would have a genetic foundation, and it may be extremely rare to develop schizophrenia independently of a genetic anomaly; a small subset of patients may have a purely genetic form. Research should also be directed at identifying risk genes and at why they assert themselves and cause the disease. Any future research that sheds more light on why some people are affected more readily than others would bring researchers closer to more effective treatments and early intervention (see Table 5).
Table 5 - Future Directions
1. Critical research studies should aim at establishing the viral and autoimmune aetiology of a subset of schizophrenia, as the illness may be due to both factors. Detection criteria and tests are vital in isolating this subset from the rest of the schizophrenia syndromes.
2. Robust epidemiological studies should be conducted to find putative infectious agents and possible modes of transmission.
3. New methods for the detection of viral agents should be developed, directed both at the analysis of previously identified pathogens and at the identification of novel viruses. Vigorous studies with PCR and other sensitive methods for nucleic acid detection should be carried out to detect viral nucleic acids in the body fluids of schizophrenia sufferers.
4. A method should be found to turn off autoimmune attacks or to selectively disable the immune response.
5. Risk genes should be identified, along with the specific DNA molecules and their tagging patterns vital to the progress of the illness.
6. Drugs targeting specific genes should be developed, which would be far more effective and have fewer side effects.
7. Psycho-physiological parameters for early detection should be found to minimise the damage.
8. In the event of the future discovery of effective antiviral agents, the subset of schizophrenia in question could take advantage of the clinical benefits of such discoveries.
9. A viral aetiology, if proven true, could lead to a vaccine against the disease.
10. Selective immunosuppressants could be a future addition to the psychiatric armamentarium.
11. A derivative of clozapine without its haematological and metabolic side effects would be highly promising.
Summary
There are multiple interlinked causative factors in schizophrenia, and viral infection may be only a trigger. Viral infections may provoke vigorous immune responses or trigger an autoimmune process leading to neuro-behavioural aberrations, and a subset of schizophrenia would then emerge as a viro-immuno-neuropsychiatric or autoimmuno-neuropsychiatric disorder. If such a subset of schizophrenia contains an autoimmune component, whether triggered by infectious agents or due to unidentified intrinsic factors, the disease process would be determined by genetic vulnerability. There is not yet sufficient evidence to implicate viruses in the aetiology of schizophrenia, but researchers have reason to anticipate further laboratory studies as newer, more sensitive laboratory technologies evolve. A viral or autoimmune model of schizophrenia may illuminate its pathogenesis, but not necessarily the diversity of psychiatric symptomatology. In the last few decades, schizophrenia research has focussed on neurotransmitter derangements and neuro-developmental anomalies. Yet the cause of a tsunami lies not in the sea water but in the tectonic shifts beneath the sea bed; the aetiology of schizophrenia may similarly lie in immune alterations.
Pellagra psychosis due to niacin deficiency was once hidden under the schizophrenia umbrella.67 There may be other psychotic disorders grouped under schizophrenia that have a purely biological aetiology, chemical or infectious, but with genetic vulnerability. No one can be sure whether it is the toxic products of the pathogens, the immune response of the host, or both, that lead to the psychopathology. Searching for this hypothetical virus is a challenging task, but if researchers found it, the benefits would be enormous. A viral aetiology of certain types of schizophrenia, if demonstrable, could effect radical changes in treatment and management. In fact, the hypothesis of viral aetiology is more promising than any other biological hypothesis, as it carries a message of potential drug cure. In this context, it is interesting to note that the antigenic similarity between components of the streptococcus and cardiac tissue resulted in rheumatic heart disease, yet with the advent of penicillin this disease has virtually disappeared. Only time will determine the validity and therapeutic prospects of the viral and autoimmune aetiology of schizophrenia.
Davison opined that, as evidence accumulates about the autoimmune basis of at least a subset of psychiatric disorders, clinicians should keep abreast of immuno-neuropsychiatric research.68 Psychiatry must constantly expand to meet growing needs as novel ideas emerge in other medical specialities, and it is high time to introduce a new term, “psycho-immunovirology”, for the study of the viral aetiological mechanisms involved in psychiatric disorders such as schizophrenia. Neuro-virology and psycho-immuno-virology could develop as an interdisciplinary field representing a melding of virology, psychiatry, the neurosciences and immunology.
Methamphetamine and related compounds are the most widely abused drugs in the world after cannabis1. Methamphetamine is a synthetic stimulant that acts on both the central and peripheral nervous systems. It causes the release, and blocks the reuptake, of dopamine, norepinephrine, epinephrine and serotonin at the neuronal synapse. Methamphetamine can be smoked, snorted, injected or ingested orally. It is more potent than cocaine, and its effects last longer2,3.
Methamphetamine intoxication causes various systemic complications such as sympathetic overactivity, agitation, seizures, stroke, rhabdomyolysis and cardiovascular collapse. Acute cardiac complications of methamphetamine, including chest pain, hypertension, arrhythmias, aortic dissection, acute coronary syndrome, cardiomyopathy and sudden cardiac death, have been reported4,5. Chronic methamphetamine use is associated with coronary artery disease, chronic hypertension and cardiomyopathy6.
Here we present a case of methamphetamine overdose presenting with cardiomyopathy and severe systolic heart failure, in which cardiac function normalised after treatment.
Case presentation
A 38-year-old male presented with shortness of breath, chest tightness and sweating which started after he used intravenous crystal meth the day before presentation. He was an active polysubstance abuser and regularly used a range of drugs including marijuana, alprazolam, amphetamine, cocaine, Percocet (oxycodone and acetaminophen) and clonazepam. He was also on a methadone maintenance programme. The patient had no cardiac problems in the past. He had a seizure disorder but was not on medication, and he had an episode of seizure after the methamphetamine use. His review of systems was otherwise unremarkable.
On presentation he was tachycardic with a pulse of 128/min, and his temperature was 98 degrees Fahrenheit. He had bilateral diffuse crackles at the lung bases. Troponin I was elevated at 4.23 ng/ml (reference 0.01-0.05 ng/ml) and BNP was elevated at 657 pg/ml (reference 0-100 pg/ml). His electrolytes, renal function, liver function and creatine kinase were normal. Urine toxicology was positive for opiates, methadone, amphetamine, benzodiazepines, cocaine and cannabinoids. Electrocardiogram showed sinus tachycardia at a rate of 130/min, and the QTc was prolonged at 488 ms (Figure 1).
Figure 1 - Electrocardiogram: Sinus tachycardia at 130/min with prolonged QTc
Subsequently the patient became tachypnoeic and hypoxic; he was intubated, put on a mechanical ventilator, and sedated with Versed, fentanyl and propofol. Arterial blood gas analysis showed respiratory acidosis and hypoxia. The patient was in cardiogenic shock, so a dopamine drip was started and intravenous Lasix was given. A subsequent chest X-ray showed newly developed pulmonary congestion. Echocardiogram showed left ventricular dilatation with diffuse hypokinesis and depressed systolic function. The left atrium was dilated. He had moderate diastolic dysfunction, mild mitral regurgitation and tricuspid regurgitation with a pulmonary artery pressure of 38 mmHg. Global left ventricular function was reduced, with an ejection fraction of 25-30%. His CT head was negative for infarct or haemorrhage. He was managed in the cardiac care unit and responded very well to treatment. He became haemodynamically stable and dopamine was discontinued; aspirin, clopidogrel and carvedilol were started. The patient gradually improved and was extubated. Cardiac catheterization showed normal coronaries and normal left ventricular function, with an LVEDP of 18 mmHg. A repeat echocardiogram one week later showed normal left ventricular systolic and diastolic function with an ejection fraction of 70%. The patient was discharged to drug rehabilitation after eight days of treatment.
Discussion
This patient used intravenous crystal meth, after which his problems started, so the most likely culprit was methamphetamine. Although he used multiple drugs, including cocaine and amphetamine, which have acute and chronic effects on the heart, his cardiac function had previously been normal. Different mechanisms for cardiac injury due to methamphetamine have been proposed, including catecholamine excess, coronary vasospasm and ischaemia, an increase in reactive oxygen species, mitochondrial injury, changes in myocardial metabolism, and direct toxic effects3. Methamphetamine use is known to cause acute and chronic cardiomyopathy, and reversal of cardiac failure has been documented after discontinuing the drug. In one case report, a patient with chronic methamphetamine-associated cardiomyopathy did not demonstrate late gadolinium enhancement, consistent with an absence of significant fibrosis, and left ventricular function recovered with 6 months of medical therapy and decreased drug abuse7. In another case, a 42-year-old female methamphetamine user had transient left ventricular dysfunction and wall motion abnormalities, and an index ventriculogram showed apical ballooning consistent with Takotsubo cardiomyopathy; her left ventricular function improved significantly after 3 days of medical treatment8. In our patient, the acute cardiomyopathy resolved quickly with intensive medical management. It is not clear how long it takes for cardiomyopathy to revert to normal after discontinuing the drug, or at what stage cardiac damage becomes irreversible. Many patients who use methamphetamine also ingest other drugs, and it is unclear to what extent the use of multiple drugs plays a synergistic role in the cardiac complications that occur. Among patients who present with cardiomyopathy and cardiogenic shock, the use of drugs like methamphetamine and the co-ingestion of other drugs should be considered. Further study is needed before treatment can be recommended for cardiomyopathy induced by methamphetamine and related drugs.
A 79-year-old lady presented to the accident and emergency department with severe abdominal pain. On admission she was hypotensive and hypothermic. Blood tests demonstrated raised inflammatory markers and white cell count, but were otherwise unremarkable. A CT scan revealed no abnormalities. She was treated with intravenous fluids and empirical antibiotics.
She had multiple co-morbidities, including ischaemic heart disease, hypertension and chronic kidney disease.
Figure 1 – Upper Oesophagus
Figure 2 – Distal Oesophagus
Figure 3 - Gastro-oesophageal Junction
Figure 4 – Stomach (in retroflexion)
Figure 5 - Duodenum
Three days into her admission she had a single episode of haematemesis and a gastroscopy was arranged. Endoscopic features were as per Figures 1-5. Histology taken at the time showed necrotic tissue with evidence of candidiasis. Her treatment was optimised with a two-week course of fluconazole (dose adjusted for her renal function) and parenteral nutrition, with good clinical response. She was discharged after a two-week hospital admission. A repeat gastroscopy 10 weeks later showed complete resolution of the endoscopic features with no evidence of perforation or stricture formation.
DISCUSSION
The images seen at endoscopy demonstrate a region of oesophageal ulceration progressing to a diffuse, circumferential, black discoloration of the distal oesophageal mucosa, with an abrupt transition to normal mucosa at the gastro-oesophageal junction (Figs. 1-3). These endoscopic features, in the absence of a history of ingestion of caustic substances, are diagnostic of Acute Oesophageal Necrosis (AON), also known as ‘Black Oesophagus’. Whilst histology demonstrating necrosis is not necessary to make the diagnosis, it is supportive.
AON was first described in 1990 by Goldberg et al., since when over one hundred cases have been reported in the literature1. Population studies have suggested the incidence of this condition to be between 0.08% and 0.2%, although interestingly one post-mortem series of 1,000 patients failed to reveal any cases2-4. There is a male preponderance, with an incidence four times greater than that in women, and a peak incidence during the sixth decade of life5,6.
The aetiology of this condition is not entirely clear; however, case reports to date suggest that it is almost exclusively observed in those who are systemically unwell, usually in the context of multi-organ dysfunction5-7. It has been postulated that necrosis most commonly occurs as a consequence of hypoperfusion caused by a low-flow state in those with underlying vascular disease. This is likely to account for the predilection for the distal third of the oesophagus, which is relatively less vascular5. Individual cases have occurred in association with bacterial, viral and fungal infections, whilst malnutrition, malignancy and immunocompromise appear to be important factors3,5,6.
The most common indication for the gastroscopy that makes the diagnosis of AON is haematemesis and melaena, accounting for over 75% of cases6. It is therefore likely that AON is significantly under-reported, as endoscopy is often precluded in those who are clinically unstable. Further, it is not clear whether haematemesis is a universal symptom of this condition; it is conceivable that AON may go undiagnosed in those in whom this is not a feature.
Whilst AON has no specific treatment, its presence is indicative of significant systemic compromise and predicts a poor prognosis. This diagnosis should alert physicians that close monitoring and aggressive treatment are required to optimise patient outcomes. There is no clear role for the use of anti-acid therapy; however, it is commonly used in management because of patient symptoms, which usually include haematemesis. Similarly, candidiasis may occur in conjunction with AON; whilst it is not thought to be causative, treatment is considered prudent given the poor prognosis associated with this condition.
For those who recover from their acute systemic insult, the prognosis appears to be good. The long-term sequelae of this condition include oesophageal stenosis due to stricturing. Evaluation with a repeat gastroscopy is therefore indicated if dysphagia develops.
CONCLUSION
The clinical course of AON is variable, with an associated mortality of 32%5. The severity of the underlying clinical condition appears to be the most important factor in determining prognosis. There is no specific treatment for AON. The current body of experience suggests that aggressive management of abnormal physiology optimises outcomes5,6. Antibiotics, antifungals and nutritional support should be considered on an individual basis.
Angiogenesis is an integral process in the biological programmes of embryonic development, tissue damage and regeneration, tumour growth and progression, and the pathogenesis of inflammatory and autoimmune diseases. MS (multiple sclerosis) is a demyelinating disease of the CNS (central nervous system). Angiogenesis has been a consistent feature of the demyelinating plaques of MS1-3, and many inducers of angiogenesis are expressed in these plaques. They are also closely associated with the animal model of MS, viz. EAE (experimental autoimmune encephalomyelitis)4 (Table 1). This has led to the suggestion that inhibition of angiogenesis, by suppressing these effectors or inhibiting elements of the angiogenic signalling pathways, might provide a viable therapeutic target in the management of MS.
[Note: Inhibitory effects of thalidomides were described by Sherbet4; D’Amato et al.6; Kenyon et al.7; Lu et al.8]
Multiple sclerosis is an autoimmune inflammatory condition, and immunomodulators have therefore been used in its treatment. It is recognised that aberrant activation of the immune system, and of the associated network of its regulation, is an important event in the pathogenesis of the disease; this is the rationale for using immunomodulatory agents in disease control. Among immunomodulators of note are fingolimod, which prevents infiltration of auto-destructive lymphocytes into the CNS; teriflunomide, which reduces lymphocyte infiltration of the CNS, axonal loss and inflammatory demyelination; and dimethyl fumarate, which modulates the immune system by many mechanisms. Furthermore, much attention has been devoted to the immunomodulatory properties of MSCs (mesenchymal stem cells)4,5. Thalidomides are also capable of modulating the function of key elements of the immune system related to the pathogenesis of MS, but this brief article is intended to emphasise the potential of thalidomide and its analogues as potent inhibitors of angiogenesis and the latent possibility of their use as therapeutic agents in the control of MS.
Thalidomide was introduced over four decades ago to treat respiratory infections and to combat morning sickness in pregnant women. It was withdrawn when it was found to be highly teratogenic. The teratogenic effects result from the binding of thalidomide to cereblon, a protein found in both embryonic and adult tissues. Cereblon is required for normal morphogenesis; it is inactivated by binding to thalidomide, and this leads to teratogenesis9. Thalidomide possesses immunomodulatory, anti-inflammatory, anti-angiogenic and cell proliferation inhibitory properties, and this has suggested its use in the treatment of cancer5. Analogues of thalidomide, viz. lenalidomide and pomalidomide, have been synthesised, and these possess reduced toxicity and greater efficacy10,11. Recently, many studies have elucidated the signalling pathways which thalidomides inhibit and through which they suppress cell proliferation, promote apoptosis and inhibit angiogenesis. These have led to the suggestion of combining modulators of these signalling pathways with thalidomides so as to deliver the suppressor effects with enhanced efficacy and at lower concentrations, thus reducing the side effects5 (Figure 1).
Most of the work on the efficacy of thalidomide and its analogues has been carried out in preclinical models. Quite understandably, little effort has been made in the clinical setting to establish whether thalidomide or its analogues provide any beneficial effects in MS or neuro-inflammation. Clinically orientated investigations so far relate mainly to multiple myeloma and some other haematological malignancies, but not to solid tumours5. Any perceived beneficial effects are probably outweighed by the side effects, and more effort is needed to design and develop new analogues with reduced toxicity. In this context, one should emphasise that pre-clinical exploration of the potential synergy between the thalidomides and acknowledged modulators of the signalling pathways would be worthwhile; this might enable benefits to be delivered more effectively and at lower dosages. Needless to say, the safety of drug administration is of paramount importance.
As syphilis is a notable clinical and pathological imitator, its diagnosis remains challenging. Physicians should be vigilant to suspect syphilis in cases of non-specific signs, such as lymphadenopathies, even in patients with no apparent risk for sexually transmitted infections or a history of primary syphilis.
Case Report
We report the case of a seventy-year-old woman with a medical history of arterial hypertension. She had neither smoked cigarettes nor drunk alcohol, and she had no significant family medical history. The patient presented with a history of swelling in the left axilla of one year’s duration. The swelling had gradually increased in size and was painless. There was a history of occasional low-grade fever and weight loss, but no cough or night sweats.
On initial examination, the patient was thin with generalised lymphadenopathy: she had an axillary adenopathy that measured 4 cm in diameter in the right axilla and one measuring 3 cm in the left axilla. She also had two cervical lymph nodes that were less significant, and one enlarged right inguinal lymph node of about 3 cm in diameter. The existing lymph nodes were painless, mobile, mildly tender and smooth. Otherwise, breasts, limbs and other regions were essentially normal. No skin rash or suspect lesions were noticed. All her family members were well, with no contributory medical history, and none of them had similar symptoms.
A complete blood count revealed a white blood cell count of 5,300/µl (neutrophils 40%, eosinophils 19%, lymphocytes 30%, monocytes 10%) and a C-reactive protein of 14 mg/l. The remaining results of her full blood count, electrolytes, liver enzymes, lactate dehydrogenase and urine analysis were within normal limits. Calcium and phosphate levels were normal in both blood and urine. Human immunodeficiency virus screening and serological tests for hepatitis B and C were negative. A Mantoux test did not show any induration. Smear and culture of the sputum were negative. Her chest X-ray and abdominal ultrasound were normal.
A CT scan of the patient’s neck and chest showed a marked anterior mediastinal mass of about 50 mm diameter with multiple calcifications. Several small lymph nodes were also noticed in the cervical and axillary areas. An axillary lymph node biopsy was performed. Histopathological examination of the biopsy specimen revealed a granulomatous lesion with epithelioid and multinucleated giant cells (Fig.1) associated with calcifications and central areas of caseous necrosis (Fig.2), which were highly suggestive of tuberculosis.
Fig 1: Epithelioid granuloma with giant cell
Fig 2: Eosinophilic granuloma with acellular caseous necrosis
Given these clinical and pathological findings, and since the most common granulomatous diseases are mycobacterial diseases such as tuberculosis, the diagnosis of tuberculous lymphadenitis was highly suspected and the patient was given anti-TB drugs. However, other differential diagnoses were considered, including bacterial infections such as syphilis or actinomycosis, protozoal infections such as toxoplasmosis, and miscellaneous diseases such as sarcoidosis, Crohn's disease and Wegener's granulomatosis. To distinguish between these disease processes and make a definitive diagnosis, further investigations, such as special stains, culture methods and serological tests, were indicated.
Additional histological stains, including Ziehl-Neelsen, were performed and returned negative, excluding the diagnosis of tuberculosis. In the meantime, serological tests showed a positive Venereal Disease Research Laboratory test (VDRL: 1/8) and Treponema pallidum haemagglutination assay (TPHA: 1/350). As a result, the diagnosis of secondary syphilis was confirmed and tuberculosis treatment was ceased.
The patient received intramuscular injections of 2.4 million units of benzathine penicillin every three weeks. Additional clinical and laboratory examinations were performed for both the patient and her family. She did not present with any manifestations of cardiovascular or neurological syphilis. Her husband’s VDRL and TPHA tests were negative. After a nine-month follow-up, the patient had no clinical or laboratory evidence of syphilis.
Discussion
Syphilis is predominantly a sexually transmitted disease with both local and systemic manifestations. The causative organism is the spirochaete Treponema pallidum (TP), which was first demonstrated on the 17th of May 1905 1.
Syphilis has many non-specific signs and symptoms that may be overlooked by the physician, because in some cases it may simply be indistinguishable from other more common diseases. In fact, syphilis can share clinical manifestations with other treponemal and non-treponemal diseases, and it may be asymptomatic in some stages. Unfortunately, undiagnosed and untreated syphilis may lead to life-threatening complications such as hepatitis, stroke and neurological damage 2. Therefore its clinical diagnosis must be supported by laboratory tests.
Several older methods can be used to confirm syphilis diagnosis such as direct identification of TP by dark-field microscopy or direct fluorescent antibody tests, but such tests are not practical in a routine clinical setting and these methods can only be performed on lesion exudate or tissue 3.
As a consequence, the diagnosis in most patients is based on serological tests. Guidelines from the United States of America (USA) and Europe recommend a combination of two tests: the first is a non-treponemal (cardiolipin, reaginic) test, typically the Venereal Disease Research Laboratory (VDRL) or rapid plasma reagin (RPR) test; the second is a treponemal test, typically the TP haemagglutination assay (TPHA), TP particle agglutination, or the fluorescent treponemal antibody absorption (FTA-abs) test 3,4.
In our patient, the most significant clinical finding was lymphadenopathy. This case presented diagnostic difficulties because of its clinical and histopathological resemblance to other pathological conditions. In fact, the presence of generalised lymphadenopathy and the finding of granulomatous lesions with epithelioid cells in the biopsy were highly suggestive of tuberculosis. As a matter of fact, tuberculosis tops the list of aetiological causes of granulomatous infections5. Worldwide, it is considered a leading contagious disease, causing approximately 1.4 million deaths per year 6. Its prevalence remains extremely high in certain populations, especially in low- and middle-income countries such as Tunisia, where the disease is endemic.
Tuberculosis is caused by Mycobacterium tuberculosis (M. tuberculosis) and M. bovis, acid- and alcohol-fast organisms 7,8. Histopathology is characterised by the presence of epithelioid granulomas with Langhans giant cells and central caseous necrosis 7.
Lymphadenitis is the most common extra-pulmonary manifestation of tuberculosis but its diagnosis is difficult, often requiring biopsy. In such granulomatous disease, and in cases of persisting doubts, it is necessary to identify the specific etiological agent by further investigations such as special stains, culture methods and molecular techniques like polymerase chain reaction (PCR) and serological tests, as in the case of syphilis.
In the case of tuberculosis infection, demonstration of the mycobacteria can be achieved with Ziehl-Neelsen staining or by immunofluorescence using auramine-rhodamine. Mycobacterial culture and detection of mycobacterial DNA using PCR are also used 7,9. Since the growth of mycobacteria in culture takes a long time, additional histological staining with Ziehl-Neelsen was performed, but returned negative in our patient. As a consequence, the diagnosis of tuberculosis was excluded and syphilis was considered the definitive diagnosis.
Conclusion
Granulomatous lesions can be seen in numerous diseases. A definitive diagnosis cannot be made on the basis of the history and physical examination alone; confirmatory testing should be performed in order to identify the specific aetiological agent correctly. Diagnosis of the disease in its initial stages would be beneficial, not only allowing patients to receive early treatment but also preventing the spread of the disease to others.
Poetry is a way of expressing the subjective experiences that spill over the rational mind, and it permits the spontaneous overflow of subjective feelings. The ability to express oneself through poetry, and to share that experience, is one of the unique human capacities that distinguish us from lower biological forms. The strife and struggle of modern life have made people miserable wretches on the face of this beautiful cosmos, and the technological revolution has taken from them the poetic sense, an age-old coping mechanism. Those not capable of expressing the sorrows and joys of their everyday life in poetic words could find a surrogate writer in The Gushing Fountain.
In his collection of poetry, Dr Latoo (currently working as a Consultant Psychiatrist in the United Kingdom) offers poems for every mood and occasion: love, parting and sorrow, inspiration, rapture, memory, nature, solitude, and contemplation. Some of them are deeply personal, and the author seems to be trying to unearth a time capsule he had left in his native Kashmir. Dr Latoo appears to be searching for inner truths and making a self-exploratory pilgrimage in this collection. Poetry has the power to describe and dramatise one’s own life, and Dr Latoo has done this well. The themes move from childhood to old age, love to grief, sorrow to joyfulness, aggressive nationalism to corrupted politicians, and depression to psychosis. There are pearls of mystical wisdom embedded in the poetry:
“Be choreographed by a great master for our sustenance Rather than just be a part of random unplanned accident?”
There are wise statements in “Divine Justice”:
“Anything like fatalism shall be a contradiction Of the Divine justice, free will and Lord’s will.”
All the poems carry hidden messages, and the book as a whole stands for immense moral values. “Woman” stands for women’s rights and dignity. The thoughts about forgotten orphans are touching. “Behold a Man - Judging others” points towards the fallacy of judging other people without correcting oneself. Mental health professionals are particularly prone to this error because they are often professionally bound to assess their clients; we are supposed only to assess others, not to judge them. We should not even judge ourselves, but only undertake self-assessment; God is the only Judge. The author writes about very ordinary, humble human beings such as the barber, Rupa, Ayesha, Ahmed and Puja. “Marriage” highlights the sanctity of wedlock. These poems reflect the world view of the author. “Hold fast to thy dreams” may be inspired by Langston Hughes (1902 - 1967), and reminded me of my father, who liked Hughes’s verses on dreams. “A raven who wants to be a dove” refers to people who pretend to be what they are not, wearing borrowed garments.
There are also poems about the author’s travelling experiences. A century ago, a poet writing about an airport would have been frowned upon by his peers, but in the 21st century it is appropriate to write such poetry. “Noisy airport and my mind” illustrates the hustle and bustle of contemporary life and gives the book a modern flavour. “By the Dal Lake” is nostalgic, with the author trying to recapture and share his lost Kashmiri literary empire with the reader. As the author was born in Kashmir, it is no surprise that he renders nature beautifully in his poems. One wonders what the content of William Wordsworth’s writings would have been had he been born in Kashmir. Dr Latoo’s dual identity is evident when he writes about Kashmir and London.
In these days of global union through mere technology, poetry may have a serious role in an “international soul-union.” The days of purely regional poetry are over. Poets like Dr Latoo may be able to contribute to the formation of a healthier global village; poetry penetrates beyond the psychic realm into the spiritual dimension. There is a mission of peace and love in The Gushing Fountain, and the author does not force strong convictions on the reader. There is a poet-philosopher in the author of The Gushing Fountain.
The author has used rhyming and free verse styles of poetry. Metaphors and similes are appropriately embedded in various situations:
“A smile on our face blooms the gardens of her innocent soul A tear in our eyes arises from the blood of her bruised heart.” (From the poem, “Mother”)
The lyrical poems are ravishingly harmonious and there are no repetitions. The thoughts are clear and there is an exotic element throughout. On the whole, the poems are cerebral and riveting. The works are relevant to the present century and can be appreciated by scholarly and casual readers alike. Every poem is an experience to be savoured and memorised. Let these pieces of poetry echo and reverberate not only in the conflict-ridden valleys of Kashmir, but all around the world, until they find rest in the minds of the waiting millions.
Psychiatry is going through an identity crisis because the newer medications are not as effective as they were expected to be, and clinicians are turning to different forms of psychotherapy. Poetry or lyric therapy could be another form of psychotherapy deserving attention in the field of soft psychiatry, and Dr Latoo’s book could be an inspiration and encouragement in this line of treatment. Hypnotherapists readily recognise that words are like loaded bullets and are highly potent. To an extent, poetry therapy involves the principles of both hetero- and self-hypnotherapy. Primitive and modern religions alike take advantage of the potential of different forms of poetry in religious rituals for healing and promoting health.
A study of the mechanism of poetry writing is helpful in developing better conceptual models of creativity and a deeper understanding of mental processes. Sudden flashes of creative insight and other intuitive leaps arise from states of mind through intermediate steps that remain hidden beneath consciousness; such ultrafast processing involving a concealed intermediate step has been argued to be consistent with quantum computation. A poet who enjoys superior mental health is capable of swinging from the unconscious quantum logic to the classical logic of consensus consciousness with ultrafast speed. In psychotic states, the “quantum gates” do not shut as swiftly as in normal mental states, and sufferers become trapped in the quantum logic. The usefulness of poetry therapy in helping psychotic patients, stuck in the quantum logic of the unconscious mind, return to the classical logic of ordinary consciousness needs further analytical study. The primary aetiology of psychotic disorders may be biological, but the secondary symptoms are quantum-linked, and the new generation of psychotherapists will have to learn quantum meta-languages to communicate with psychotic and depressed patients. Poetry is one such source of quantum meta-language.
Poetry therapy promotes abstract thinking and develops the imaginative powers. It is also a means of relieving and revealing innermost sentiments; it helps to ventilate overpowering emotions and hidden tensions. It is a form of self-expression and helps to build greater self-esteem, and it is useful in strengthening interpersonal and communication skills. It would be valuable in repairing the assault of psychosis on the personalities of sufferers. Quoting from my own memory lane, I became interested in poetry therapy when I comprehended the core problem of a patient who wrote:
“Moon, you shine at the centre of the sky, Catching attention from all over the world, Don’t you know that I am lonely?”
Poetry is of the heart and imagination, whereas science is about reason and logic, and the two may be grounded on contradictory principles. If science is about objectivity, poetry is essentially about subjectivity, and to blend these human experiences harmoniously is a hard task; Dr Latoo has successfully achieved this goal. A man of science, when he writes poetry, has to liberate himself from the shackles of rationalism so that he can be a wholly free human: to be a poet one has to be a natural human being. To quote Jean-Jacques Rousseau: “Man is born free and everywhere he is in chains.” Let us hope that The Gushing Fountain will have a part two, and even more!
A 19-year-old male presented with a history of recurrent respiratory tract infections and progressive diminution of vision. Fundoscopy was performed and showed the changes in the image below.
What is the finding suggestive of?
1. Retinitis pigmentosa
2. Drug toxicity
3. Congenital rubella
4. Syphilis
Answer: Retinitis Pigmentosa
Retinitis pigmentosa (RP) is a bilateral inherited progressive retinal degeneration presenting in the first to second decades of life.1 The inheritance can be autosomal dominant, autosomal recessive or X-linked recessive. The hallmark symptoms of RP are night blindness and visual field constriction. Fundus changes in retinitis pigmentosa include waxy pallor of the optic disc (black arrow), arteriolar attenuation (white arrowhead) and bony spicule pigmentation (white arrow) in the mid-peripheral fundus, which is predominantly populated by rods. Vessel attenuation is the earliest feature seen clinically. Although intraretinal pigmentary migration is relatively easy to observe, it takes years to develop, so early RP may exhibit only vessel attenuation without pigmentation (previously known as RP sine pigmento). Prognosis is variable and tends to be associated with the mode of inheritance.
Drug toxicity with chloroquine can result in visual disturbances, and a history of drug usage prior to the visual disturbance may be present. Fundus examination shows a subtle bull’s-eye macular lesion characterised by a central foveolar island of pigment surrounded by a depigmented zone of RPE atrophy, which is itself encircled by a hyperpigmented ring.2 In congenital rubella, a history of maternal infection will be present. Fundus findings include a salt-and-pepper pigmentary disturbance involving the periphery and posterior pole with normal vessels, RPE mottling and no intraretinal pigmentary migration. Syphilitic retinopathy may show sectorial or generalised pigmentation.3 The onset can be from adulthood to old age, and a history of genital ulcer may be present.
SSRIs (selective serotonin reuptake inhibitors) are very commonly used in depression and anxiety. Though considered the safest antidepressants, they have some common side effects, including gastrointestinal side effects, headache and, at times, sexual dysfunction. Yawning is one of the rarer side effects of SSRIs; in a meta-analysis, SSRIs were found to be the commonest cause of the uncommon phenomenon of drug-induced yawning1. Isolated cases of intractable yawning have been reported in the literature with citalopram2 and with fluoxetine, citalopram and sertraline3. Excessive yawning can cause injury to the temporomandibular joint (TMJ)4. Paroxetine has also been shown to cause intractable yawning5. Yawning possibly helps in thermoregulation and is an unconscious effort by the body to cool the brain6,7. It is known that yawning can be contagious: reading, talking, seeing someone yawn or even thinking about yawning can induce yawning8. Susceptibility to contagious yawning differs between individuals depending upon their ability to process information about the self9.
Case
A 60-year-old postman presented with his first episode of depression. He attended his GP, who started him on sertraline (an SSRI). He developed severe headaches and did not notice any therapeutic benefit. He was then referred to the psychiatric services for further management. He was assessed, sertraline was stopped and Cipramil (citalopram) 20 mg was introduced. He was reviewed after 2 months and the dose was increased to 40 mg, to which he responded partially but relapsed within 4 months. There were no changes in his psychosocial circumstances. Cipramil was stopped and he was started on fluoxetine 20 mg. Once again the response was partial and was overshadowed by midnight insomnia and increased daytime sleepiness. Fluoxetine was increased to 40 mg and he was reviewed after 4 months, when he reported clear and significant improvement in his depression but complained of “excessive yawning spells” causing him problems at his workplace. The psychiatrist was surprised at the number of times he yawned during the outpatient clinic review. On further discussion it became clear that this side effect had become highly troublesome. He complained that his jaw was in severe pain. He was unable to do his delivery rounds and was having clear episodes of attention lapses leading to letters being delivered to the wrong addresses. He was taken off delivery rounds and transferred to sorting the post at the sorting counters, but even there the intractable yawning continued and he was making sorting errors. By now it was affecting his colleagues too, and they also started yawning (yawning is known to be contagious). It was affecting his self-confidence and was extremely embarrassing in social situations, to the extent that he started avoiding social interactions. He was drowsy all the time. He was clearly suffering more from the excessive yawning than from the depression. He was unable to perform his employment duties and was signed off sick. At that point the dose of fluoxetine was reduced to 20 mg. After a couple of weeks his yawning reduced significantly but was still disruptive to his routines. He was advised to taper off fluoxetine slowly over the next 4 weeks. Unfortunately his depression relapsed and his GP restarted him on fluoxetine 20 mg. He was reviewed by the psychiatrist after a couple of weeks, and once again he reported the return of intractable yawning.
Fluoxetine was stopped once again and he was started on mirtazapine 15 mg. There was very little response, and the dose was increased to 30 mg after around two weeks. This led him to experience nausea and vomiting, so unfortunately mirtazapine too had to be stopped. He was then tried on amitriptyline 50 mg, which improved his sleep and his symptoms of depression. He was reviewed in the outpatient clinic after a couple of months; he had not developed any side effects and had responded quite well. He then returned to his job, moving from part-time to full-time within 6 weeks. After 6 months on the same dose of amitriptyline he did not have any symptoms of depression and was finally discharged from the mental health services.
Discussion
SSRIs are the first-line antidepressants used in the treatment of depression and anxiety disorders. They are known to have the fewest side effects and to be the safest in overdose. Intractable yawning is an unusual and uncommon side effect, but one has to be conscious of the fact that the yawning it causes can be pathological and can severely disrupt a patient’s life. It can also contribute to poor compliance. It is easy to overlook and ignore this side effect, as yawning usually seems to represent sleep problems, which are themselves a significant feature of the associated depression.
Excessive yawning can cause jaw and facial pain, and can even cause dislocation of the temporomandibular joint. It can cause severe problems with one’s work and self-esteem; the sufferer might be misunderstood as inattentive, indolent or sluggish. It might affect relationships with spouse, friends and relatives, and especially at the workplace. It can be misunderstood by doctors and lead to unnecessary tests and investigations. One has to be cautious when prescribing SSRIs to patients who drive or handle heavy machinery, and to athletes, airline pilots, surgeons, lifeguards, air traffic controllers and many other professionals. Due to its contagious nature, it is not only the patient who is affected but also others around him. Excessive yawning can adversely affect the level of arousal, the level of concentration and work efficiency, leading to poor performance in tasks requiring undivided attention.
Hence excessive or intractable yawning has to be kept in mind when prescribing the so-called safest class of antidepressant medication, the SSRIs, in this case fluoxetine.
The Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure has long reported chronic hypertension as affecting over one billion individuals worldwide1. While the role of primary care providers in the long term management of this ubiquitous condition cannot be overstated, the hypertensive patient can also present challenges to an acute physician when the control of arterial blood pressure reaches crisis level.
The What
The clinical entity extravagantly referred to as a hypertensive crisis describes an elevated systolic blood pressure of >180 mmHg with a diastolic pressure of >120 mmHg. Within this category of acute presentations, two subcategories are defined: the hypertensive urgency and the hypertensive emergency. Flamboyant terminology aside, what distinguishes the latter ‘emergency’ from the former ‘urgency’ is evidence of acute end-organ damage. Emergencies therefore include various incipient pathologies of the cardiovascular, renal and central nervous systems. Fortunately these are the less common encounters for receiving physicians: a recent large multicentre study identified hypertensive emergencies in 25.3% of hypertensive crises, with acute pulmonary oedema (30.9%), cerebrovascular accident (22%), myocardial infarction (17%), acute aortic dissection (7.9%), acute kidney injury (5.9%) and hypertensive encephalopathy (4.9%) as their features, while the remainder of the presenting population demonstrated a hypertensive urgency with its inherent lack of evidence of end-organ damage2.
The Why
The pathophysiology of acute hypertension remains yet to be fully elucidated, however authors in the field of hypertensive crisis3,4 appear to converge on the point of two common proposed pathophysiological events. A sharp elevation in systemic vascular resistance is thought to be one precipitating factor, with an aberrance of cerebral autoregulation of blood flow being another.
For the purposes of an acute clinician faced with a bleeping blood pressure monitor, what is perhaps more applicable to everyday clinical practice is the potential role of non-adherence to regular antihypertensive medications5,6, as discussed below.
The Who
A longitudinal study carried out in Switzerland and led by Saguner7 identifies several potential risk factors for the manifestation of a hypertensive crisis. Female gender, obesity and concurrent somatoform disorder accompany hypertensive and coronary artery-related cardiac disease as potential red flags. Perhaps unsurprisingly, a history of multiple antihypertensive therapies was also associated with a greater likelihood of presentation with hypertensive crises, as was non-adherence to the same therapeutic regimen; the latter compliance-related issue was identified as the most significant by the study’s authors.
Elderly patients and also those of African American ethnicity have been shown to demonstrate higher rates of hypertensive crises in general8, while Caucasian patients are reported to have higher rates of emergencies as opposed to the more benign urgency equivalent9.
The When
The findings of a comparatively small Italian hospital-based study10 of 360 patients were recently supported by a larger United States-based analysis11 of over 400,000 patients, with a seasonal variation in the presentation of hypertensive crises noted. A winter peak and summer trough were reported by both groups of authors, suggesting transcontinental extrapolation of a potential seasonal phenomenon.
Evaluation
Comprehensive disposition notwithstanding, acute physicians are urged to adopt a targeted approach when considering a presentation with alarming blood pressure readings.
Present…
By nature of definition, the presentation of a hypertensive crisis encompasses a wide variety of symptomatology depending on whether a hypertensive urgency or incipient emergency is manifested.
The symptomatology of a patient demonstrating hypertensive urgency can be fairly non-specific to acute blood pressure elevation. A 2014 study into clinical presentation of hypertensive crises reported headache as the most prevalent symptom (74.11% of patients), followed by chest discomfort and dyspnoea (62.35%), vertiginous dizziness (49.41%), nausea and emesis (41.47%)12 as demonstrated in Figure 1.
Figure 1. Symptomatology in hypertensive crises (adapted from Salkic S, Batic-Mujanovic O, Ljuca F, et al12)
While all of these common presenting complaints can bring a patient to a physician’s attention, what often alerts the attending physician to the particular possibility of an acute hypertensive condition is the blood pressure reading obtained on initial assessment of the patient (for instance for triage purposes) even in the absence of overt symptomatology as reported above. Indeed, patients with minimal symptomatology may be prompted to present themselves for acute medical care by no more than the sounding of an ominous alarm on a home blood pressure reader or the disconcerted look of a perturbed primary care physician, sphygmomanometer in hand!
…and Past
The history taking process of an acute physician faced with a hypertensive crisis should target several key areas which may prove essential in differentiating a case of urgency from an evolving emergency. With the potential for end organ heart, kidney and brain-related complications in mind, a physician should probe the possibility of chest discomfort, dyspnoea and signs of congestive cardiac failure (as indicators for incipient cardiovascular complications), headache, visual changes, dizziness and altered consciousness (potential harbingers of neurological complications) as well as recent history of oliguria as a marker of possible related renal insult.
Having conducted an interrogation for worrisome symptomatology, evaluation should proceed to a ‘hypertension history’. Prior diagnosis of hypertension and hypertensive crises in particular should be elaborated on, with this including a history of any prescribed regular antihypertensive therapy and both the adherence to and effect of the latter. Relevant to the notorious polypharmacy patients, any history of concurrent medication use must be clarified so as to give an indication of potential interactions.
Of historical note is the potential for hypertensive crisis following interaction of tyramine with mono-amine oxidase inhibitors (the so-called cheese effect), while a provoked hypertensive crisis more relevant to modern medicine is the potential effect of illicit substances including cocaine and amphetamine-based products13.
Examination
As with the evaluation of the hypertensive crisis patient’s history, examination should place particular emphasis on distinguishing urgency from emergency.
Parameters
Assessment of vital signs can provide valuable indicators. Whilst initial systolic pressure is not necessarily a predictor of the ability to achieve a prespecified target range pressure within thirty minutes14, the presence of tachycardia has been shown to be an ominous sign more prevalent in emergency than urgency, with a strong statistical association demonstrated with hypertension-related left ventricular failure15.
Physical
Cardiovascular examination should assess for the presence of signs of cardiac failure (including an elevated jugular venous pressure, added S3 heart sound or pulmonary rales) as well as the feared asymmetric pulses or new mid-diastolic murmur associated with aortic dissection. Auscultation for renal bruits should be performed, and a neurological assessment for possible stroke indicators undertaken.
Whilst chronic hypertension patients will often have subtle fundoscopic abnormalities, ophthalmological review for evidence of acute changes including new retinal haemorrhages or exudates together with papilloedema should be carried out.
Investigation
The unique circumstances of individual presentations aside, prompt acute medical investigation of a hypertensive crisis should include a minimum set of bedside, laboratory and imaging investigations16, as suggested in Figure 2. Comparison of each of these with pre-existing baseline investigations may be invaluable in indicating the level of acute pathology and therefore the care required.
Figure 2. Investigations in hypertensive crises
Bedside
Electrocardiography affords rapid exclusion of major acute ischaemic cardiac events, as well as providing an indication of chronic hypertrophic changes and a quantitative indicator of heart rate elevation. Simple dipstick urine testing can assist in exclusion of significant proteinuria pending formal urinalysis studies16.
Laboratory
Full blood count analysis will give an indication of haemoglobin level where dissection is suspected, while serum markers of renal profile including creatinine level in particular may suggest varying degrees of acute kidney injury where present. Cardiac biomarkers may complement electrocardiography in exclusion of acute events.
As ever, a metabolic panel and blood gas analysis represent valuable tools in the acute physician’s arsenal where acute and evolving physiological disturbances are suspected.16
Imaging
Presence of pulmonary congestion in keeping with left ventricular failure as well as the mediastinal widening of an aortic dissection may be assessed via simple chest radiography. More complex imaging such as computerised tomographic (CT) scanning may be indicated as dictated by clinical presentation, as in the event of neurological manifestations16.
Treatment
Established guidelines1 suggest that definitive management of a hypertensive emergency should involve lowering of blood pressure by 25% in the first hour, and then to 160/100-110 mmHg thereafter if the patient is stable, as indicated in Figure 3. Such management, requiring meticulous and continuous monitoring in an intensive care setting for the parenteral administration of antihypertensive agents including labetalol17, clevidipine18-20 and fenoldopam21, is beyond the scope of most practising acute physicians.
Figure 3. Broad management of a hypertensive emergency (adapted from Chobanian A V, Bakris GL, Black HR, et al1 and Börgel J, Springer S, Ghafoor J, et al26)
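As a purely illustrative arithmetic sketch (the presenting values here are hypothetical and not drawn from the cited guideline): for a patient presenting at 240/140 mmHg, a 25% reduction in the first hour corresponds to a target of roughly 180/105 mmHg (240 − 0.25 × 240 = 180; 140 − 0.25 × 140 = 105), followed by a more gradual reduction towards 160/100-110 mmHg thereafter if the patient remains stable.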
Hypertensive urgency, however, need not require such invasive interventions, with oral therapy utilising labetalol, captopril or clonidine followed by a period of vigilant observation usually proving sufficient1,17. The once-popular practice of administering oral nifedipine is advised against, owing to the precipitous drop in pressure, with its inherent risk of tissue ischaemia, observed on administration of this agent1. Emerging pharmaceutical options, including novel felodipine formulations22, may also be considered.
A pitfall among physicians, perhaps panicked by the term ‘hypertensive urgency’, has been observed: inappropriate management of such cases has been reported in multiple independent studies in recent years23–25, with an appropriate treatment rate of only 42.6% in one study25. A chief consideration when faced with a hypertensive crisis, therefore, may be to avoid rash intervention.
Worthy of mention is the potential for co-prevalent secondary causes of hypertension, including sleep apnoea, renal artery stenosis and hyperaldosteronism; these were present in 15% of cases in one series, and it has been recommended that they be considered prior to therapeutic intervention26.
Outcome
There…
Indicators of a greater likelihood of admission in patients presenting with severe hypertension may include age >75 years, dyspnoea, altered mental status and creatinine elevation27.
…And Back Again
Following discharge after an admission for acute severe hypertension, a 90-day readmission rate of up to 35% has been reported28; of those readmitted, 41% were readmitted more than once, and re-presentation with severe hypertension accounted for 29% of readmissions. Curiously, a dyspnoeic initial presentation is emphasised by the same data source as a risk factor for readmission, with additional risk factors including ictal phenomena at initial presentation and a history of both drug abuse and prior admission for severe hypertension.
Key Points
Definition
A hypertensive crisis involves pressures of >180mmHg systolic and >120mmHg diastolic
A hypertensive urgency does not include end organ damage
A hypertensive emergency implies end organ damage
Symptomatology
The commonest symptoms are headache (74.11%), chest discomfort & dyspnoea (62.35%), vertiginous dizziness (49.41%) and nausea & emesis (41.47%)
Investigations
Bedside tests should include urinalysis and electrocardiography
Laboratory should include creatinine level
Imaging should include plain chest radiography
Management
Blood pressure should be lowered by 25% over the first hour
In hypertensive urgency, oral therapy is often sufficient
Dermatomyositis (DM) is a rare autoimmune process with an aetiology that is not yet fully understood. It is characterised by a combination of striated muscle inflammation and cutaneous changes. The pathogenesis of the cutaneous manifestations of DM is likewise not well understood. DM occurs in all age groups, and two clinical subgroups are therefore described: adult and juvenile. The adult form predominates among female patients, with a clinical presentation that includes a Heliotrope rash (Fig. 1), Gottron’s papules (Fig. 2), nail fold telangiectasia and various other cutaneous manifestations in association with inflammatory myopathy.1 In addition to the previously mentioned symptoms, juvenile patients also commonly suffer from skin ulceration and recurrent abdominal pain due to vasculitis. An increased occurrence of oncological processes has been observed in adult DM, with a slight female predominance2; these patients carry a higher risk of comorbid cancers. The most common include malignant processes of the ovary, lung, pancreas, stomach, urinary bladder and haematopoietic system.3 The significance of these observations is that the development of DM should raise suspicion of a possible parallel oncological process.
Figure 1. Heliotrope rash
Figure 2. Gottron’s papules
Materials and Methods
A retrospective consecutive case series was performed on a group of 12 patients hospitalised at the Department of Dermatology, Venereology and Allergology at the Medical University of Gdansk between 1996 and 2013. The diagnostic criteria for DM included: hallmark cutaneous lesions of DM, clinically significant muscle weakness evaluated by electromyography (EMG), and indicative laboratory findings (muscle enzymes, muscle biopsy, autoantibodies). All 12 cases had muscle biopsy, serum studies and EMG performed. The retrospective study analysed the age and sex of the patients, course of the disease, accompanying diseases, clinical picture and treatment. The patients with malignancies were analysed by the primary organ of origin and the period between the diagnosis of DM and that of malignancy (Table 1).
Table 1. Patient characteristics
No. | Sex | Previous medical history | Age of onset of DM | Clinical picture | Diagnostics | Treatment | Malignancy and age at diagnosis
1 | F | Chronic eosinophilic leukaemia | 54 | Muscle weakness of shoulder and hip area, facial oedema and erythema, palmar erythema | CK 2550, ANA Hep-2 1:640, LDH 901, AST 69, ALT 143, X-ray = N, USG = N, EMG = N | Azathioprine, Prednisone | Stage IIA ovarian cancer at 55
2 | F | Peptic ulcer disease | 66 | Facial erythema, Gottron’s papules on the hands, muscular weakness creating difficulty in movement, weight loss, decreased appetite | ANA Hep-2 1:1280, CT = N, EMG = N | Glucocorticosteroids | Small cell carcinoma at 66
3 | F | None | 23 | Muscular weakness of shoulder and hip area; difficulty in standing up and walking up stairs, Gottron’s papules, Heliotrope rash, upper chest erythema | ANA Hep-2 1:2580, CPK 12022, AST 595, ALT 210, CK-MB 534, Jo-1 = N, Mi = N | Azathioprine, Prednisone, Methotrexate | None
4 | F | Chronic obstructive pulmonary disease | 42 | Muscular weakness of shoulder and hip area, facial oedema and erythema | | Cyclophosphamide, Methylprednisolone | Stomach tumour at 43
5 | F | None | 22 | Muscle weakness, painful extremities, facial oedema and erythema | ANA Hep-2 = N, CT = N | Cyclophosphamide, Prednisone | None
6 | F | None | 42 | Muscle weakness, paraesthesia of hands, facial oedema and erythema | ANA Hep-2 1:640 | Cyclophosphamide, Prednisone | None
7 | F | Hypertension, diabetes type II, osteopenia, leiomyoma | 65 | Muscle weakness of shoulder and hip area, facial oedema and erythema | ANA Hep-2 1:1280, LDH 650 | Cyclophosphamide, Prednisone | None
8 | F | Hyperthyroiditis | 46 | Muscle weakness; difficulty in moving, facial oedema and erythema | ANA Hep-2 1:160 | Cyclosporine A, Prednisone | None
9 | F | Autoimmune hepatic disease, leiomyoma | 45 | Muscular weakness of shoulder and hip area, facial oedema and erythema | ANA Hep-2 1:2560, CK 3700, Mi-2 = P | Azathioprine, Methylprednisolone | None
10 | F | Hypertension, diabetes type 2, hypothyroidism, ovarian cysts | 57 | Muscle weakness of shoulder and hip area, facial oedema and erythema, upper chest erythema, Gottron’s papules, fatigue, dysphagia | ANA Hep-2 1:640, CK 747, LDH 363, AST 78, Ro52 = P, Mi-2 = N, Jo-1 = N, PM/Scl = N, CT = two pulmonary lesions that were biopsied and diagnosed as pneumoconiosis | Prednisone, Methotrexate | Cervical carcinoma at 51, breast cancer at 57, pulmonary metastasis at 58
No. = number (patient), DM = dermatomyositis, F = female, M = male, CK = creatine phosphokinase, ANA = antinuclear antibodies, LDH = lactate dehydrogenase, AST = aspartate transaminase, ALT = alanine transaminase, N = negative, P = positive, USG = ultrasonography, EMG = electromyography, CT = computerised tomography
Limitations
The small sample size is a significant limitation of this retrospective analysis; DM is a rare disease with a prevalence of 1:1000. Increasing the sample size by combining cases from multiple institutions, and implementing a control group, would further strengthen the presented material.
Results
The average age of onset of the disease was 48 years. All 12 subjects were female. Previous medical history included chronic eosinophilic leukaemia, diabetes mellitus type II, hypertension, leiomyomas, hypo- and hyperthyroid disease, chronic obstructive pulmonary disease, peptic ulcer disease, autoimmune hepatitis and osteopenia; the two most common were diabetes mellitus type II and hypertension. The clinical picture of each case was similar in that all of the patients presented with some form of muscle weakness. In addition, typical features of DM with Gottron’s papules, periorbital oedema, facial oedema and erythema were noted in five patients. Antinuclear antibody (ANA) Hep-2 titres of >1:160 were identified in nine patients. Additional laboratory markers such as creatine kinase (CK), lactate dehydrogenase (LDH), aspartate transaminase (AST) and alanine transaminase (ALT) were elevated in five patients. Two patients had muscle biopsies performed. The immunohistopathological picture consisted of Immunoglobulin G (IgG), fibrinogen, C1q and C3 deposition around the perimysium, and granular deposits of Immunoglobulin M (IgM) at the dermo-epidermal junction. Of the 12 patients, four had neoplasms in addition to the diagnosed DM. The primary cancers originated from the cervix, breast, stomach and ovary. All four of these patients had the diagnosis of DM prior to the diagnosis of a malignancy.
Discussion
The diagnosis of DM is made by combining the clinical picture with the results of various laboratory findings: skin and muscle biopsies, EMG, serum enzymes and ANAs.
The clinical picture varies. The typical dermatological presentation consists of an erythematous and oedematous periorbital rash - the Heliotrope rash (Fig. 1). Symmetrical redness and flaking can be observed on the elbows and the dorsal aspects of the fingers, especially over the metacarpophalangeal and interphalangeal joints - Gottron’s papules (Fig. 2). Erythematous lesions can also be found at other locations such as the face, upper chest and knees.4 The dermatitis heals with atrophy, leaving behind areas that resemble radiation-damaged skin. The striated muscle inflammation most often involves the shoulder and hip area, leading to muscle weakness and atrophy. The intercostal muscles and the diaphragm may be involved, raising concern about respiratory compromise. Dysphagia can be present due to inflammation of the smooth and skeletal muscles of the oesophagus. These inflammatory processes often lead to muscle calcification.5 Clinically, the sum of these changes is seen most often as weakness, weight loss and subfebrile temperatures. All patients in our study had co-existing muscle and cutaneous symptoms, with variation in severity and localisation. Five patients had the classical picture of shoulder and hip area weakness; the rest had more generalised muscle weakness. Two patients had atypical complaints of hand paraesthesia and extremity pain respectively.
Subtypes of DM exist for the purposes of epidemiological research and, at times, prognosis. They are categorised by the clinical presentation and the presence or absence of specific laboratory findings. These subtypes are: classic DM, amyopathic DM, hypomyopathic DM and clinically amyopathic DM; they have little impact on routine diagnosis. Common laboratory findings in DM are elevations of CK, AST, ALT and LDH, which mainly reflect the muscle involvement; amyopathic DM lacks both abnormal muscle enzymes and weakness.6 Enzymatic elevation may sometimes precede the clinical symptoms of muscle involvement, and hence a rise in muscle enzymes in a patient with a history of DM should raise suspicion of recurrence. Positive ANA findings are frequent in DM but not necessary for diagnosis; more myositis-specific antibodies include anti-Mi-2 and anti-Jo-1. A typical histopathological examination shows myofibre necrosis, perifascicular atrophy and a patchy endomysial infiltrate of lymphocytes, and occasionally the capillaries may contain membrane attack complexes.7
Cutaneous changes and muscular complaints may also correspond to: 1. systemic scleroderma, which often has a positive ANA; 2. trichinosis, in which periorbital swelling and myositis occur, but with prominent eosinophilia and a history of consuming undercooked swine or bear meat; 3. psoriasis with joint involvement, which may give a clinically similar picture to DM, although the skin changes in psoriasis have a more flaking pattern. In doubtful cases, a skin and muscle biopsy together with electromyography will set the diagnoses apart. A facial rash may also be observed in systemic lupus erythematosus, together with nail fold telangiectasia; the two are usually distinguished by the broader organ system involvement of systemic lupus and by serological studies. A drug-induced picture of DM also exists and is particularly associated with statins and hydroxyurea.8
It is estimated that around 25% of DM cases are associated with a neoplastic process that can occur prior to, during or after the episode of DM. The risk of developing a malignancy is highest in the first year of DM and remains elevated for years after diagnosis.9,10,11 This was the case with patients 1, 2 and 4 in our study, where the malignant process appeared in the first year following the onset of DM. Risk factors seen in DM patients include male gender, advanced age and symptoms of dysphagia.12 The age range of the four patients in our study with malignancy was 43 to 66 years. Symptoms that clinically raised suspicion of a malignant process included weight loss, lack of appetite and dysphagia. All neoplasms were discovered within one year after the diagnosis of DM was made. One patient had a previous history of cervical cancer, six years prior to the onset of DM.
The most common neoplasms seen in patients with DM vary across the world. In Europe the malignancies are located mainly in the ovaries, lungs and stomach. The cancer types associated with DM correlate with the common cancers seen in the same region; in Asia, for instance, nasopharyngeal carcinoma (a rare malignancy in Europe) is a frequent occurrence in DM.1,3 The neoplasms seen in our study were gastric, breast, ovarian and pulmonary. Screening for malignancies in patients with DM is individualised and should be based on risk factors such as previous malignancies, alarming symptoms such as weight loss or dysphagia, or abnormal findings on physical examination. This was the case with patient 10 in our study, who had a previous history of cancer, and patient 2, who had symptoms of weight loss and decreased appetite. Initial screening was negative for patients 1 and 2, in whom the malignancy developed only after the onset of DM. Age-appropriate screening with mammography, faecal occult blood testing and Papanicolaou smear should be considered. Additional investigations with chest films, computerised tomography (CT) scanning of the chest, abdomen or pelvis, colonoscopy, cancer antigens and gynaecological ultrasonography should be performed when indicated.
The main objective of treatment in DM is to improve muscle strength and obtain remission, or at least clinical stabilisation. No specific protocol exists for the treatment of DM; treatment is individualised and adapted to the specific condition of the patient. High-dose corticosteroids are the basis of treatment. Although their efficacy has not been demonstrated in randomised placebo-controlled trials, it is evident in clinical practice, and corticosteroids therefore remain the initial treatment of choice. Doses start at around 1 mg/kg/day depending on the corticosteroid of preference. This dosing is maintained for approximately two months until clinical regression is achieved, followed by dose reductions of approximately 10 mg over the coming three months, aiming for a maintenance dose of approximately 5-10 mg. The exact parameters are patient-specific. In the case of a severe flare of dermatomyositis, intravenous methylprednisolone pulses of 1 g per day for three days can be administered. The systemic effects of long-term corticosteroid therapy have to be kept in mind; hence, yearly dual-energy X-ray absorptiometry bone scans can be performed to monitor for the development of osteopenia.
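As a purely illustrative aid to the dosing arithmetic above, the sketch below (Python) generates one possible reading of such a taper for a given body weight; the function name, the monthly step size and the assumption of prednisone-equivalent milligrams are ours, and this is not a prescribing protocol (as the text notes, the exact schedule is patient-specific):

# Illustrative sketch only, not a prescribing protocol. One possible reading of
# the taper outlined above: ~1 mg/kg/day held for about two months, then ~10 mg
# monthly reductions down to a 5-10 mg maintenance dose (prednisone-equivalent).

def illustrative_taper(weight_kg, maintenance_mg=10):
    """Return a list of (month, daily dose in mg) pairs for a rough taper."""
    dose = round(weight_kg * 1.0)        # ~1 mg/kg/day starting dose
    schedule = [(1, dose), (2, dose)]    # held for approximately two months
    month = 3
    while dose - 10 > maintenance_mg:
        dose -= 10                       # ~10 mg reduction per month thereafter
        schedule.append((month, dose))
        month += 1
    schedule.append((month, maintenance_mg))
    return schedule

print(illustrative_taper(60))
# [(1, 60), (2, 60), (3, 50), (4, 40), (5, 30), (6, 20), (7, 10)]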
Further treatment options are considered where the initial disease presentation is severe or involves internal organs, where relapse occurs during steroid dose reduction, or where steroid side-effects develop. It has been proposed that combination therapy is a better approach owing to lower reported relapse rates and a reduced need for high-dose corticosteroids. Methotrexate is second-line therapy when steroids alone fail; it is used at a maximum dose of 25 mg per week together with folate supplementation. The limitations of Methotrexate are immunosuppression and pulmonary fibrosis. Methotrexate is considered preferable to Azathioprine because the latter takes longer to become effective. Azathioprine is administered at doses ranging from 1.5-3 mg/kg/day and has a side-effect profile similar to that of other immunosuppressants. Cyclosporin A is a T-cell cytokine modulator with a similar efficacy profile to Methotrexate; side-effects include renal impairment, gingival hyperplasia and hypertrichosis, and dosing ranges from 2-3 mg/kg/day.
An expensive but effective alternative with a relatively low side-effect burden is intravenous immunoglobulin. The dosage has not been officially established in the treatment of DM, but options are 2 g/kg per course, given either as 1 g/kg/day for two days every four weeks, or as 0.4 g/kg/day for five days initially and then for three days monthly for three to six months. Other alternatives include Mycophenolate Mofetil, Cyclophosphamide, Chlorambucil, Fludarabine, Eculizumab and Rituximab.9 A further option is treatment targeted at the malignancy when one is associated with DM; this was observed in our patient 10, in whom full remission of DM was obtained only after lobectomy and chemotherapy for the mammary carcinoma.
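The total per-course dose implied by either regimen above can be checked with simple arithmetic; the sketch below (Python, illustrative only, with hypothetical function and variable names, not prescribing guidance) shows both options for an example body weight:

# Illustrative arithmetic only: both intravenous immunoglobulin options quoted
# above deliver roughly 2 g/kg per course. Not prescribing guidance.

def ivig_options(weight_kg):
    return {
        "total_per_course_g": 2.0 * weight_kg,
        "option_1": (1.0 * weight_kg, "g/day for 2 days, every 4 weeks"),
        "option_2": (0.4 * weight_kg, "g/day for 5 days initially"),
    }

print(ivig_options(70))
# {'total_per_course_g': 140.0,
#  'option_1': (70.0, 'g/day for 2 days, every 4 weeks'),
#  'option_2': (28.0, 'g/day for 5 days initially')}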
Conclusion
DM mainly affects women, and all 12 cases presented in our study were female. One third of our cases had malignancies associated with their course of DM. We conclude that it is reasonable to screen these patients, especially those with established cancer risk factors. Age-appropriate screening, extended where indicated by risk factors or the clinical presentation, is warranted. A high index of suspicion should be maintained in patients with a previous history of oncological treatment, since DM can be the first clinical sign of cancer recurrence.
An audit and re-audit on the monitoring of the physical health of patients on antipsychotic medication in the Early Intervention in Psychosis Service of the 5 Boroughs Partnership NHS Foundation Trust
Introduction
A growing number of studies suggest a causal relationship between antipsychotic treatment and metabolic disturbances. The most frequent problems linked to antipsychotic drugs have been abnormalities of glucose metabolism such as insulin resistance, hyperglycaemia or new onset diabetes mellitus and dyslipidemia, including increased levels of total cholesterol, LDL-cholesterol and triglycerides.1
Developing effective models of identifying and managing physical ill health among mental health service users has increasingly become a concern for psychiatric service providers. Individuals with Serious Mental Illness (SMI), defined as any Diagnostic and Statistical Manual (DSM) mental disorder leading to substantial functional impairment, have higher than expected risks of physical morbidity and mortality in comparison with members of the general population.2 People with mental health problems such as Schizophrenia or Bipolar Disorder have been shown to die on average 16 to 25 years sooner than the general population.3 One set of explanations for these vulnerabilities points to the lifestyles of people with serious mental illnesses, which are often associated with poor dietary habits, obesity, high rates of smoking, and the use of alcohol and street drugs.4 Illness-related factors have also been cited: it has been suggested that individuals with serious mental illness are less likely to spontaneously report physical symptoms.5 Poor physical activity has also been shown to be a common occurrence in people with serious mental illness.6,7
A greater inherent predisposition to develop metabolic abnormalities coupled with metabolic adverse effects of antipsychotic drug treatments may negatively influence physical health.8 Many of these problems can be avoided if close attention is paid to the physical health of patients on antipsychotic treatment. A longstanding debate persists concerning who is responsible for the physical care of patients with serious mental illness. Psychiatrists and physicians are advised to play an active role in ensuring that patients with mental illness are not disadvantaged.9
The Warrington and Halton Early Intervention in Psychosis Team is based in the 5 Boroughs Partnership (5BP) NHS Foundation Trust in the North West region of the United Kingdom, and in collaboration with Advancing Quality Alliance (AQuA), they embarked on a joint audit between November 2012 and May 2013 with the aim of reviewing the practice regarding the routine monitoring of physical health of service users on antipsychotic treatment. The study set out to reduce the cardio-metabolic effect of antipsychotic medication in service users. The study was also aimed at contributing to a reduction in the mortality rates in people with severe mental illness as well as testing out approaches to improve the physical health of people with serious mental illness who are receiving care from the Early Intervention in Psychosis Teams. The promotion of a more integrated approach to the physical health care of people with a SMI was also targeted.
Method
In November 2012, the Warrington and Halton Early Intervention in Psychosis service (EIP) conducted the initial audit, designed by AQuA as a baseline measure of the current standard of physical health screening amongst the Early Intervention patients in the two boroughs. The recommendations from the National Institute for Health and Care Excellence (NICE) and Maudsley prescribing guidelines were the frameworks for the AQuA design. The Research and Audit Governance Group in the 5 Boroughs Partnership NHS Foundation Trust approved the audit.
A retrospective review of the clinical records of all patients open to the EIP who were prescribed antipsychotics was undertaken. Six physical health parameters were examined: serum lipid profile, blood glucose, body weight, height, Body Mass Index (BMI) and blood pressure. These parameters were entered into the Survey Monkey audit tool developed by AQuA.
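For illustration of how these parameters relate, the following minimal sketch (Python) is hypothetical and is not the AQuA/Survey Monkey tool itself; the record layout and function names are assumptions. It derives BMI from the recorded weight and height and counts how many of the six parameters are documented for a patient:

# Hypothetical sketch, not the audit tool itself: derive BMI from weight and
# height, and count how many of the six audited parameters are documented.

PARAMETERS = ["weight", "height", "bmi", "blood_glucose", "lipid_profile", "blood_pressure"]

def bmi(weight_kg, height_m):
    """Body Mass Index = weight (kg) / height (m) squared."""
    return round(weight_kg / (height_m ** 2), 1)

def documented_count(record):
    """Number of the six parameters with a recorded value."""
    return sum(1 for p in PARAMETERS if record.get(p) is not None)

record = {"weight": 82, "height": 1.75, "blood_pressure": "128/84"}
record["bmi"] = bmi(record["weight"], record["height"])   # 26.8
print(documented_count(record))                            # 4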
Other items audited were the frequency of screening, the number of physical health parameters evaluated at each period of recording and the smoking status of the service users. Clinical records were checked for a documented history of physical illness in all patients. The number of service users receiving physical health interventions as a result of the screening, and the number of service users who were offered physical health interventions at the screening but either refused treatment or did not respond to the referral, were also recorded. The results were presented at a Trust-wide forum, and recommendations were made and disseminated shortly afterwards. A re-audit was done in May 2013.
Results
Table 1 summarises the demographic details of patients at baseline and re-audit. 55 patients were involved in the baseline audit and 52 patients were involved in the re-audit. No significant differences were observed between the two audits in terms of gender distribution and age. The majority of patients involved in both audits were of white British ethnicity.
Table 1: Demographic details of patients at baseline audit and re-audit
                            Nov 2012    May 2013
Total number of patients    55          52
Male : female               35:20       22:30
Age range (years)           14-36       15-36
White British ethnicity     52          48
Baseline audit: November 2012
Screening and monitoring
The table below indicates the number of service users receiving a screening for weight, height, BMI, glucose blood levels, lipid blood levels and blood pressure at the 4 week, 3 month, 12 month and 24 month assessments.
Table 2: Physical health screening of service users at baseline
Recorded screening       4 weeks       3 months      12 months     24 months
1 type of screening      5 (9.1%)      12 (21.8%)    18 (32.7%)    18 (32.7%)
2 types of screening     14 (25.5%)    17 (30.9%)    5 (9.1%)      5 (9.1%)
3 types of screening     4 (7.3%)      4 (7.3%)      3 (5.5%)      6 (10.9%)
4 types of screening     5 (9.1%)      3 (5.5%)      5 (9.1%)      3 (5.5%)
5 types of screening     4 (7.3%)      0             1 (1.8%)      4 (7.3%)
6 types of screening     4 (7.3%)      3 (5.5%)      4 (7.3%)      2 (3.6%)
There was no screening recorded for 19 (34.5%) patients at 4 weeks, 16 (29%) patients at 3 months, 19 (34.5%) patients at 12 months and 17 (30.9%) patients at 24 months.
Smoking status of service users
Based on the analysis of those referred to the smoking cessation service, it was concluded that around 35% of service users within the EIP Service smoke. The findings also indicate high refusal rates for smoking cessation programmes (over 80% of those service users who confirmed that they smoked).
Documented history of physical illness
The presence or absence of physical illness was documented in the records of 35 patients. Where physical health problems were identified, patients were offered a number of interventions. These included referral to dietician/exercise programmes, smoking cessation support, and referral to primary care services for illnesses such as hypertension, diabetes and hyperlipidemia.
Table 3 summarises the types of interventions available to patients when physical health issues were identified. Patients recorded as N/A required no intervention, as no physical health problems were identified.
Number of service users receiving physical health interventions
Table 3: Physical health interventions
                                           Yes           No            N/A
Referral to dietician/exercise programme   15 (28.8%)    26 (50%)      14 (25.5%)
Treatment for Diabetes                     0             22 (45.8%)    33 (60%)
Treatment for Hyperlipidemia               2 (4.2%)      23 (47.9%)    30 (54.5%)
Treatment for Hypertension                 0             22 (45.8%)    33 (60%)
Help with smoking cessation                12 (24.5%)    19 (38.8%)    24 (43.6%)
Re-audit: May 2013
Screening and monitoring
The table below indicates the number of service users receiving screening for weight, height, BMI, blood glucose, blood lipids and blood pressure at the 4 week, 3 month, 12 month and 24 month assessments. It shows that 29 patients had their screening recorded at 4 weeks, 19 (66%) of whom had 6 types of screening. At 24 months, of the 16 patients who had their screening recorded, 15 (95%) had 6 types of screening. Patients with no recorded screening parameters were omitted.
Table 4: Physical health screening of service users at re-audit
Recorded screening       4 weeks       3 months      12 months     24 months
1 type of screening      2 (7%)        0             0             0
2 types of screening     2 (7%)        2 (8%)        1 (4%)        1 (5%)
3 types of screening     1 (3%)        1 (4%)        3 (11%)       0
4 types of screening     3 (10%)       3 (12%)       1 (4%)        0
5 types of screening     2 (7%)        1 (4%)        1 (4%)        0
6 types of screening     19 (66%)      18 (72%)      21 (77%)      15 (95%)
Smoking status of service users
The overall data confirm that 25 patients identified as smokers were offered smoking cessation support, 19 of whom refused, giving an overall refusal rate of 76%.
The table below compares the results of both audits with respect to “6 types of screening” done at 4 weeks, 3 months, 12 months and 24 months. The result shows an overall improvement over the audit period.
Comparing results of both audits with respect to “6 types of screening”
Table 5: Comparison of screening results
             November 2012    May 2013
4 weeks      4 (7.4%)         19 (66%)
3 months     3 (5.5%)         18 (72%)
12 months    4 (7.4%)         21 (77%)
24 months    2 (3.7%)         15 (95%)
Discussion
The first audit revealed suboptimal screening of the 6 targeted parameters at 4 weeks, 3 months, 12 months and 24 months in the service users audited, when compared with the recommendations of the Maudsley guidelines (see Table 2). Some of the issues identified are summarised in the table below;
Table 6: Issues identified following the first audit
Sporadic health and wellbeing sessions
Ad-hoc physical health checks prior to commencing antipsychotics
Physical health screening was not perceived as priority
Physical screening was unsystematic and erratic
Poor referral links with local health promotion programmes
Poor attendance at physical health screening appointments
Poor recording of screening tests
Inadequate links with primary care services
Psychiatric clinics poorly equipped with instruments for basic health screening
No clarity about who takes responsibility for screening: Psychiatrists or GP?
Patients’ lack of interest and motivation in the screening process
SMI register not up-to-date
Recommendations made following the initial audit are outlined in the table below;
Table 7: Recommendations following the first audit
Need to find a comprehensive screening tool
Development of a documentation system
Building an alert system to remind when physical health checks are due
Improvement of links with primary care services
A more robust approach to ensure patients' attendance at screening clinics
Improvement of links within secondary care agencies
Identification of further skills needed within the team e.g. venipuncture, ECG
A Plan, Do, Study, Act (PDSA) model was used, which was helpful in clarifying the issues and actions needed10, including:
1. Establishing physical health as a priority within the EIP
2. Involvement of primary care and health promotion
3. Establishing a database for physical health monitoring
4. Making physical health monitoring part of care planning
To tackle the identified issues a local project group was constituted. This group was made up of a consultant psychiatrist, business manager, nurse consultant, team manager, an occupational therapist (OT), a support worker (STR), a pharmacist, social services, public health leads, wellbeing nurses, a service user representative, and a locally based General Practitioner. The group had monthly meetings.
Patients in the Warrington and Halton Early Intervention in Psychosis Service were screened using the 5 Boroughs Partnership (5BP) Comprehensive Physical Health Assessment tool. This tool covered the 6 parameters targeted in the audit and other relevant health information such as smoking, diet, exercise, sexual health, sleep, dental and optical health, ECGs, and other routine blood tests. An in-house database in which results could be recorded was devised and implemented. A computerised notification list which alerts staff when a screening is due was developed, and a GP DVD and an information leaflet for the GP website and the Clinical Commissioning Group (CCG) newsletter were produced. Wellbeing Nurse-led clinics were held in Halton and an STR-led physical health clinic was initiated in Warrington. Access to the pathology laboratories for both localities was established to help facilitate prompt access to blood results. Regular AQuA meetings took place in Salford, Manchester, and links were established with the Medical Director and the Clinical Commissioning Group, who were regularly provided with progress reports.
The re-audit in May 2013 showed an increase in the number of service users being screened and monitored for the six identified parameters (see Table 5). A robust and comprehensive recording system has been developed, resulting in more service users receiving appropriate screening and physical health monitoring. Better links and working relationships have been established with primary care services, and there is increased awareness of the need for physical health monitoring among professionals and service users. Regular and well-equipped physical health clinics with well-trained staff have been established across both localities. Other secondary care agencies within the Trust are now more aware of the requirements for physical health screening.
Why should we be doing regular physical health monitoring? The benefits of monitoring the physical health of individuals with serious mental illness cannot be overemphasised; it allows early identification and subsequent management of cardiovascular and other risk factors in a timely manner.11 The Maudsley Guidelines recommend monitoring of blood lipids at baseline, at 3 months and then yearly. Similar recommendations are made for weight, including BMI and waist size where possible. Plasma glucose measurements are recommended at baseline, at 4 to 6 months and then yearly. Blood pressure measurements are recommended at baseline and frequently during dose titration. Full blood count and electrolyte measurements are recommended at baseline and yearly.12 In the last few years, agencies worldwide have also developed clinical guidelines; in the United States, the American Diabetes Association, American Psychiatric Association, American Association of Clinical Endocrinologists and the North American Association for the Study of Obesity have released joint guidelines.13
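A notification system of the kind described earlier can be sketched around such intervals. The example below (Python) is a hypothetical illustration only: the parameter names and exact day offsets are assumptions loosely based on the intervals summarised above, not the Maudsley guideline text or the Trust's actual system:

# Hypothetical reminder sketch: when is the next check due for a parameter,
# given intervals loosely modelled on those summarised above? Illustration only.

from datetime import date, timedelta

INTERVALS_DAYS = {
    "lipid_profile":    [0, 90, 365],   # baseline, ~3 months, then yearly
    "plasma_glucose":   [0, 150, 365],  # baseline, ~4-6 months, then yearly
    "weight_bmi":       [0, 90, 365],
    "full_blood_count": [0, 365],       # baseline, then yearly
}

def next_due(parameter, started, completed):
    """Date the next check is due, given how many checks are already completed."""
    offsets = INTERVALS_DAYS[parameter]
    if completed < len(offsets):
        return started + timedelta(days=offsets[completed])
    # Beyond the listed schedule, repeat at yearly intervals.
    extra_years = completed - len(offsets) + 1
    return started + timedelta(days=offsets[-1] + 365 * extra_years)

print(next_due("lipid_profile", date(2012, 11, 1), completed=2))  # 2013-11-01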
Even though the side effects of antipsychotics are well established, many mental health services have yet to adopt a practice of regular blood monitoring as recommended by international guidelines.14 The issue of responsibility for monitoring metabolic abnormalities remains a much debated topic.9 The primary responsibility for managing the physical health of individuals with severe mental illness has been said to lie with primary care.7 Another side of the debate, however, exists, and two consensus conferences have called on mental health care providers to take responsibility for the physical health of their patients.8 It is widely recognised that mental health teams have a role to play in monitoring the physical health of their service users; however, many psychiatrists still consider psychiatric symptom control their primary responsibility.14,15 Studies have also shown that individuals with serious mental illness do not readily access primary care.16 Despite the availability of clinical guidelines, screening for and monitoring of metabolic problems in patients with serious mental illness remains suboptimal.11
The usual practice for monitoring physical health parameters, and the guidelines used, vary between centres and are rarely regulated. Local resource availability is likely to play a significant role in guideline selection, and physical equipment, staffing levels and other resource issues may need to be taken into consideration when devising a local guideline. The development of a specialised phlebotomy service for outpatient clinics, the introduction of a key worker system (as seen in the Warrington and Halton Early Intervention in Psychosis Team) with the physical health needs of patients considered part of the key worker's duties, a simple one-page monitoring prompt attached to the patient's medical file, educational interventions and oversight by senior clinicians may all increase adherence to routine blood testing guidelines. Regular liaison with General Practitioners regarding a joint approach to physical health monitoring would also help improve adherence to the guidelines.
What is medical pain? One answer would be a poorly defined concept which suffers the ignominy of poor management.
A quick internet search for the term brings up several hits to clinics offering the services of medical practitioners with pain specialty training. Definitions of ‘medical pain’ however, as opposed to those of its more easily construed post-surgical cousin, are both sparse and elusive in the learned literature. One potential candidate is provided by the International Association for the Study of Pain (IASP), whose professional presence on the web offers both a respectable description of pain syndromes of medical aetiologies as well as a taxonomical guide thereto1.
With a struggle to even define the concept, is it any wonder that medical patients with pain complaints continue to score reprehensible figures on studies into pain incidence and effective relief? This is far from a new phenomenon, with the British Journal of Anaesthesia (BJA) reporting a staggering 52% of medical inpatients in one study (N=1594) of a UK district general hospital to be in pain on the medical ward, with 20% and 12% of those in pain rating this complaint as severe and unbearable respectively2. What is particularly distressing about these statistics is the fact that data collection in the same study occurred over five days; more than ample time for complaints to be reported or recognised and appropriate relief strategies implemented. Barriers clearly exist to the provision of adequate medical pain relief, with practice shown to fall below standards recommended by the Royal College of Anaesthetists.3 A sketchy definition is perhaps one such barrier, but what other challenges exist to management of medical pain?
Predictability & On-Call Skills
In contrast to the anticipated pain following an elective surgical procedure, medical pain is less predictable in onset and consequently more the realm of an on-call physician than a specialist pain management team. One unambiguous fact when comparing specialist pain rounds with the on-call services of a more junior recruit is that the former clearly benefit from greater levels of experience, even allowing for the acquisition of specialist training. The latter inevitably rely more heavily on the knowledge base afforded them by theoretical education, which sadly tends to be rather scant in undergraduate medical programmes.
The lack of early teaching of junior staff on the subject represents one barrier to pain management in general, with formal teaching on medical pain management a particular shortcoming in several international medical curricula. This is supported by the findings of a cross-sectional study in one Sydney hospital utilising a multinational population of medical interns and residents4, which indicated that some 56.2% of respondents felt education on pain management to be inadequate. Up to 68.8% of respondents were willing to receive additional lectures on opiate use to increase their knowledge base in this regard, suggesting a definite dearth of dedicated teaching.
In recognition of similar sentiments, a dedicated junior doctor-targeted postgraduate pain curriculum was suggested in 2011 by the Faculty of Pain Medicine (FPM) of the Australia & New Zealand College of Anaesthetists (ANZCA)5. This not only recognises the need for effective pain management skills at an early career stage, but also proposes a core set of competencies and assessments thereof for application to early postgraduate physicians’ skill sets.
A Surgical Predilection?
Skills of junior on-call medics aside, the provision of committed specialist pain services undoubtedly represents one of the major advancements in acute pain patient care. And yet, the needs of medical patients have often been overlooked in favour of acute surgical pain relief, and presumably continue to be so in the face of a lack of convincing evidence to the contrary. One study published in 2008 reporting data from over 220 United Kingdom National Health Service (NHS) hospitals revealed a paltry 16% incidence of routine acute pain service in medical wards6. The same study revealed that 82.2% of clinical leads in acute pain services actually recognise this problem of inadequate pain control on medical wards. With this stark admission from front line algologists in mind, why do elderly and general medical patients consistently appear to produce disconcertingly poor results in pain studies?
Perhaps the lack of adequate medical pain services in the light of a frank admission to a predilection for surgical patients reflects inadequate training, staffing or application of resources as a barrier to effective management of medical pain.
Community Confounders
Limitations of secondary care pain services aside, the primary care setting also exhibits a confounding factor for professional provision of medical pain management – the propensity for patients to self-medicate their complaints with non-prescription remedies. The immemorial complaint of headache in the community provides a convenient example of the potential for patients to self-manage their pain symptoms. In doing so, however, they simultaneously sidestep professional oversight of the legion of adverse drug reactions, drug interactions and other implications, including paradoxical rebound pain, which may complicate management later on in the professional setting. Data published following a recent review of literature sources7 indicate codeine-based compound analgesics to be the most popular over-the-counter medications dispensed across several international populations. This telling fact may be suggestive of a trend in non-professional pain management which impedes effective management according to professional standards. Assuming surgical pain patients are relatively fewer than medical pain patients in the community, this may represent a unique challenge to providers of medical pain services.
Chronicity
One further important consideration to be made in medical pain is its potential for chronicity, with prevalence of leading pain disorders including lower back pain and chronic migraine indicated at 10.2%8 and 1-3%9 respectively in recent studies. The former in particular has exhibited an explosive trend in prevalence over recent years, with a more than 2.5-fold increase since 1992 in relative prevalence observed in one 2006 American study of households state-wide (N=5357)8.
Implicit in the chronicity of pain complaints are a number of secondary disorders which can prove troublesome for effective engagement of pain management services. The European Journal of Pain quotes a large transnational study of chronic pain patients (N=46,394)10 as finding 21% of patients to have been diagnosed with depression because of their pain. Interestingly, while almost half of subjects were self-administering over-the-counter analgesics and only 2% were being seen by a pain specialist, an astonishing 40% reported inadequate pain relief – an almost anticipated outcome of the ‘do it yourself’ approach to pain management in chronic, refractory cases? This may be less relevant in surgical pain experiences, which intuitively represent a more acute event in a more controlled environment, and therefore may be more amenable to effective management than a drawn-out pain experience over several years!
Fear of Pain
Chronicity of pain in turn evokes a largely self-explanatory phenomenon known as fear of pain, which can present a potentially sizeable obstacle to management of patients. High levels of fear of pain and also movement as a provocative agent thereof have been described in 38.6% of fibromyalgia syndrome patients (N=233)11, with this heightened fear of a painful experience linked to increased disability, depressed mood and most importantly pain severity. This latter component alludes to one of the more insurmountable barriers to management of chronic medical pain – the impasse resulting from a vicious circle of pain, fear and infinite vice versas.
The fear of pain may in turn be compounded by a fear of narcotic analgesic therapy on the part of both the patient and the prescribing physician, this being an issue in non-cancer pain as well as in malignant disease. The fear of commencing and continuing long-term opiates is traditionally said to be particularly prevalent in the primary care setting12. Fear can arise for a number of reasons, including the potential for addiction and major side effects, as well as the notion that opiate drugs represent a terminal stage in a disease process. Mention of opiates has been linked to accusations of ‘hidden diagnoses’ on the part of the physician, where patients suspect malignant pathology has been concealed from them by their care provider out of a deep-rooted belief that opiate analgesia is merited solely by cancerous conditions13. Whether this signifies an already fragile patient-doctor relationship or contributes to its deterioration, the implication for effective management of medical pain remains significant. Repeated careful review of patients on long-term opiate therapy for chronic non-cancer pain must be emphasised, however, with up to 19% of chronic pain patients found to have some form of addictive disorder in a 2001 paper on the subject14 courtesy of the BJA.
Conclusion
In summary, patients requiring relief of medical pain issues are clearly disadvantaged by the presence of numerous hurdles to effective management of their complaints. The literature base in this regard is conspicuous by its absence, with practices in medical pain management being poorly evidence-based as a result. This represents a major potential target for investigative studies and research into potential trends and best practices. Exploration of effective methods for implementation of improved education for newer staff and also resource allocation for more experienced practitioners would also be of benefit to the standard of care in medical pain.
A 41-year-old woman with a 6-year history of mild psoriasis presented with a rash under her breasts. The differential diagnosis included flexural psoriasis, an allergy to the nickel in her underwired bra, and intertriginous dermatitis (moisture-associated skin damage). She was prescribed Trimovate cream (GlaxoSmithKline) and developed a florid weeping eczema within 48 hours of application (Figure 1). The eczema settled with the withdrawal of Trimovate and the application of Betnovate RD cream (GlaxoSmithKline). The history was very suggestive of a contact dermatitis to Trimovate cream.
Figure 1 showing eczema
She was referred to the Dermatology department and was patch tested to the European standard, medicament and steroid batteries. She had a number of positive reactions, including to cetearyl alcohol, sodium metabisulphite and clobetasone butyrate, all of which are components of Trimovate. She was given advice sheets on all her allergens and, on avoiding them, she has had no recurrence of eczema.
Discussion
Contact dermatitis is a type IV (delayed) hypersensitivity reaction and usually appears within 2 to 3 days after contact with an external allergen. This case is likely to be an example of concomitant sensitisation, where one sensitivity facilitates the acquisition of another sensitivity to a chemically unrelated ingredient within a product. Whilst there has been a previous case report of concomitant sensitivity to sodium metabisulphite and clobetasone butyrate in a patient using Trimovate cream,1 this is the first report of a patient reacting to three of the ingredients found in Trimovate - sodium metabisulphite, clobetasone butyrate and cetearyl alcohol. Allergy to clobetasone butyrate is rare, with only 5 previously reported cases.1,2,3 Allergy to sodium metabisulphite is not uncommon, producing a positive reaction in approximately 4% of patients who are patch tested.4,5 Allergy to cetearyl alcohol is also rare, with one study estimating the incidence of positive reactions to be 0.8% among 3062 patients patch tested.6
Detection of the allergen, or allergens, is important, as avoidance results in resolution of the eczema. Our patient highlights the fact that it is insufficient simply to advise a patient to avoid the topical medicament that has caused a reaction. Ideally, patients with a topical medicament allergy should be patch tested to identify which components they are allergic to, so that these can be avoided in all products. In this case, in addition to Trimovate, there are a number of other products that our patient will now avoid. This is of particular significance in view of her history of psoriasis, for which she has used moisturisers and topical steroid preparations in the past, and will likely need again in the future. Clobetasone butyrate is often used in facial and flexural psoriasis. Cetearyl alcohol is a particularly important allergen to identify, as it is found in many products, including a number of commonly used moisturisers such as Diprobase (MSD), Cetraben (Genus) and Epaderm (Mölnlycke) cream, and most steroid creams, although not steroid ointments. Our patient was therefore advised to use only steroid ointments and has had no recurrence of the contact dermatitis. To conclude, GPs should consider sending their patients with contact dermatitis for patch testing, as the identification of all allergens is valuable to management.
Blue discolouration of the skin can have a multitude of causes, including Mongolian spots, blue naevi, the naevi of Ito and Ota and metallic discolouration1 or the use of drugs such as minocycline. Here we report the case of a 61 year old gentleman who developed blue macular skin lesions that were not attributable to any obvious cause and may be the result of an unidentified occupational exposure.
A 61-year-old Caucasian gentleman developed blue macular skin lesions over a 14-year period. The very first lesion appeared on the middle phalanx of the right middle finger (figure 1). It was light blue and pinpoint, eventually darkening and increasing in size to approximately 1mm x 1mm, at which point it became permanent and non-evolving. The lesion had no notable associated features and the patient was in otherwise good health.
Figure 1: The first blue macular lesion on the middle phalanx of the right middle finger.
Figure 2: A blue macular lesion on the terminal phalanx of the left middle finger.
Figure 3: A lesion on the anterior abdomen from which a punch biopsy was taken.
Figure 4: Haematoxylin and Eosin stained slide at 10x magnification. Abdominal skin biopsy showing dermal interstitial and perivascular distribution of black coloured pigment deposits
At present, he has approximately thirteen blue macular lesions in total, all of which have developed in the same manner. They are distributed predominantly on his hands, with one on his left forearm and one on the right abdominal flank. New spots continue to arise on his hands (figure 2).
A punch biopsy of the abdominal lesion (figure 3) was carried out. The histological findings were those of skin with a normal intact epidermis and the presence of black coloured granular pigment deposits, located largely within the papillary dermis, with occasional smaller deposits in the superficial reticular dermis (figure 4). The deep reticular dermis and subcutaneous fat were normal. The pigment had a perivascular distribution and was also present within dendritic histiocytic cells, in close association with fibroblasts. Histiocytic cells form part of the mononuclear phagocyte system and are recruited mainly to phagocytose, remove or store material2. The use of light microscopy alone does not identify all substances on examination of a Haematoxylin and Eosin (H&E) stained section of tissue. Applying polarisation light microscopy enables the identification of numerous structures, for example crystals, pigments, bone and amyloid3. However, the black coloured material here was non-polarisable (no refractile foreign material could be identified). These appearances on light microscopy alone are most frequently seen where there is a history of tattoo artistry, but tattoo pigment is typically identified by its reflective properties under polarisation4. Interestingly, the patient had no clinical history of deliberate tattooing and other causes were considered.
Discussion
The discovery of black coloured deposits in the dermis excludes the diagnoses of Mongolian spots, blue naevi and the naevi of Ito and Ota, all of which are disorders of dermal melanocytes. Another important differential is malignant melanoma, but this was excluded as the histopathological examination showed no evidence of dysplasia or malignancy.
In a disorder known as anthracosis, similar findings of black coloured deposits can be seen in other organs such as within the lung and draining lymph nodes. It is often found in smokers and urban populations and reflects the deposition of carbon which is the most commonly identified exogenous mineral substance within tissue sections. The skin is not a site where such carbon pigment is typically seen and therefore, this is not a credible diagnosis in this case.
Argyria is a condition that occurs as a result of silver particle impregnation of the skin, leading to blue-grey skin discolouration. Silver exposure may be occupational or surgical (through the use of silver sutures), or may result from medication with silver salts. On interview, the patient denied any occupational exposure to silver or the use of silver salts. Although the patient had had previous shoulder surgery, silver sutures are no longer used in modern surgical practice and therefore this cannot be the cause of his skin discolouration.
Unfortunately, histological examination of paraffin embedded tissue sections can only confirm the presence and distribution of an exogenous substance and it is not possible to precisely differentiate the exact type of material which is present. The use of an electron probe micro analyser may have been useful in identifying the substance, however, such equipment is not currently available and was not used in this case.
Interestingly, in tattoo artistry, carbon black may be used to give blue tattoos their colour5, and it is also a component of tyres and industrial rubber products6. This provided us with a link to occupational exposure, given that this gentleman is a tyre worker and has been involved in both the manufacture and assembly of tyres for 34 years. Carbon can cause discolouration of the skin, depending on the extent of deposition.
It is notable that in his 34 years of working with tyres, this gentleman did not routinely use gloves or protective clothing until 10 years ago, as workplace safety precautions were not as strongly enforced at the time. He admitted to having been in direct contact with the materials involved in tyre building and also to suffering accidental superficial cuts on his hands whilst working, which may be the route by which carbon was introduced into the dermis. This is supported by the observation that the majority of the blue macular lesions were on the hands. Adding credibility to this theory is the identification of a colleague (who did not wish to be identified), whose job also involved the manufacture and assembly of tyres, and who has a similar single blue macular lesion on his hand.
In addition to this we have identified a forum on the internet7 that reports other similar cases of blue pin-point macular lesions appearing on the skin of tyre factory workers – some of whom worked for the same tyre company that this gentleman did. This may suggest that there is an association between exposure to a chemical, possibly carbon black, involved in the manufacture of tyres, and the appearance of these blue macular lesions.
In this case report, the identity of the material deposited and the route by which it accumulated in the dermis is unclear, but may have been related to an occupational exposure – this was in keeping with the general consensus upon presentation of this case at the West Midlands Dermatology Conference at New Cross Hospital Wolverhampton. We welcome any new case reports or literature that may be able to shed further light on this subject.
A 38-year-old male presented to our centre with a two-month history of jaundice. Past medical and family history were unremarkable. Clinical examination revealed icterus and a greenish-brown ring in both eyes (Figure 1). Laboratory investigations revealed mild thrombocytopenia (platelet count 1.2 lakh/mm3) and a prolonged prothrombin time. Liver function tests showed elevated serum levels of alanine aminotransferase and aspartate aminotransferase. Serology for hepatotropic viruses was negative. Serum ceruloplasmin was 9.8 mg/dl (reference range 20-60 mg/dl) and 24-hour urinary copper was elevated.
Figure 1
What is the eye finding?
Arcus Senilis
Pterygium
Kayser Fleischer ring
Phlycten
Correct Answer:
3. Kayser Fleischer ring
Discussion:
Wilson’s disease is a consequence of defective biliary excretion of copper. This leads to its accumulation in the liver and brain 1. It is due to mutations of the ATP7B gene on chromosome 13, which codes for a membrane-bound copper transporting ATPase 2.
The Kayser-Fleischer ring results from abnormal copper deposition in Descemet’s membrane at the limbus of the cornea. Slit-lamp examination by an experienced observer is required to identify a K-F ring. The colour may range from greenish gold to brown. When well developed, a K-F ring may be readily visible to the naked eye. K-F rings are observed in most individuals with symptomatic Wilson’s disease and are almost invariably present in those with neurological manifestations. They are not entirely specific for Wilson’s disease, since they may also be found in patients with chronic cholestatic diseases.
Clinical presentation is variable, with patients presenting with chronic hepatitis, cirrhosis and, at times, acute liver failure. The most common presenting neurological feature is an asymmetric tremor: characteristically a coarse, irregular, proximal tremulousness with a “wing-beating” appearance.
Typically, the combination of K-F rings and a low serum ceruloplasmin level (<0.1 g/L) is sufficient to establish a diagnosis of Wilson’s disease3. However, delayed diagnosis in patients with neuropsychiatric presentations is frequent and in one case was as long as 12 years4. Our patient was treated with a low-copper diet, oral zinc and D-penicillamine. His liver function normalised over 6 months of treatment, without progression of liver disease.
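The patient's ceruloplasmin of 9.8 mg/dl can be compared directly with the <0.1 g/L threshold quoted above by a simple unit conversion (1 g/L = 100 mg/dl); the snippet below (Python, illustrative only, with hypothetical names) makes the arithmetic explicit:

# Illustrative unit conversion: ceruloplasmin in mg/dl expressed in g/L and
# compared with the diagnostic threshold of 0.1 g/L quoted above.

def mg_per_dl_to_g_per_l(value_mg_dl):
    return value_mg_dl / 100.0   # 1 g/L = 100 mg/dl

ceruloplasmin_g_l = mg_per_dl_to_g_per_l(9.8)
print(ceruloplasmin_g_l, ceruloplasmin_g_l < 0.1)  # 0.098 True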
Arcus senilis is a grey band of opacity near the sclerocorneal margin, commonly found in the elderly and associated with hypercholesterolaemia. A pterygium is a benign, wedge-shaped fibrovascular growth of conjunctiva that extends onto the cornea. A phlycten is a consequence of an allergic response of the conjunctival and corneal epithelium, usually associated with tuberculosis, staphylococcal protein or Moraxella.