Sunday, May 24, 2020

All Of Sarah Kane's Cleansed - 2245 Words

Discuss the issues raised and the reasoning behind decisions made when attempting to contemporize your own version of Sarah Kane's play Cleansed for the stage. Comment on the professional production and how and why your work may have differed from, or mirrored, Katie Mitchell's production, and any considerations/adjustments you may have made to your performance after viewing it.

Cleansed is a play written by Sarah Kane and was performed at the Royal Court Theatre Downstairs on 30th April 1998. It centres on a seemingly psychotic character named Tinker and the experiments he carries out on the other characters; throughout the play he repeatedly tests them on their love for each other. Kane uses this to explore a range of issues, including gender identity, grief and narcotics addiction. Cleansed has a number of key themes that continuously surface throughout the text, such as love, sex, human nature, and torture and suffering. These subjects are most likely the reason that Cleansed was categorised as In-Yer-Face theatre. In fact, all of Sarah Kane's plays were classed as work that fell under the In-Yer-Face theatre genre. This theatre style came about in the mid-90s and had a very short lifespan; it existed as a pre-9/11 genre. Since the events of 9/11, like the world we live in, theatre has adapted into something new. Texts that are classified as In-Yer-Face theatre have one defining feature: they push boundaries and shock the audience. Looking at the ideas that…

Wednesday, May 13, 2020

Negligence and duty of care

Duty of care

Duty of care is the first element of negligence; therefore, in order to discuss duty of care further, one must first define the tort of negligence. In Blyth v Birmingham Waterworks Co,[1] the courts defined negligence as the omission of something which a reasonable man would do, or the doing of an act which a reasonable man would not do. In Heaven v Pender,[2] the courts held that a presumption of a duty of care arises when one person is placed in a position with regard to another person or property such that, in the ordinary sense, if he does not use reasonable ordinary care in his conduct, he will cause danger or injury to that other person or property. Ordinary care is therefore required to prevent the occurrence of such danger. In Stovin v Wise,[3] the courts explained that generally there is no duty to rescue a stranger from danger. The duty mentioned above is a duty imposed by law; in other words, it is a legal duty.

Tests to determine the existence of a duty of care

There are a few tests used in determining the existence of a duty of care. The primary test is the neighbour principle established in the well-known case of Donoghue v Stevenson.[4] In this case, Lord Atkin laid down the rule that the requirement to love your neighbour becomes, in law, a requirement to take reasonable care to avoid acts or omissions that can reasonably be foreseen as likely to injure your neighbour. The question posed by this principle is who one's neighbour in law is. The courts held that a neighbour in law is someone who is directly affected by one's act or omission. It is a reasonable man's test, whereby the courts determine whether a reasonable man would foresee that his conduct would affect the plaintiff adversely. If the answer to this hypothetical question is yes, then the plaintiff is considered his neighbour and he owes the neighbour a duty of care.[5] It is essential to note here that the neighbour principle requires the plaintiff to be a foreseeable victim, and for the plaintiff to be a foreseeable victim there has to be close proximity. The neighbour principle therefore requires the plaintiff to be in close proximity with the defendant; the plaintiff would not be a foreseeable victim if there were no proximity between the plaintiff and the defendant. In Home Office v Dorset Yacht Co Ltd,[6] the courts held that the principle laid down in Donoghue v Stevenson should be regarded as a milestone in determining whether a duty of care exists. This principle significantly assisted the development of the law of negligence. Prior to Donoghue v Stevenson, there was vagueness in the law regarding civil liability for carelessness.[7] An 1889 textbook contained a list of fifty-six various duties of care.[8] The judgment in Donoghue v Stevenson therefore brought an end to this chaotic situation and introduced the law of negligence as a separate civil wrong.
The next test used by the courts to determine whether a duty of care is established is the Anns test, laid down in Anns v Merton London Borough Council.[9] This is a two-stage approach laid down by Lord Wilberforce. The first stage is to determine whether there is a relationship of proximity between the alleged tortfeasor and the person who has suffered the loss: if it is foreseeable that the carelessness of the tortfeasor would lead the other party to suffer damage, then a duty of care is prima facie established. The second stage requires the court to take into account any considerations that may negate the said duty, or that may reduce or limit the scope of the duty or the group of persons upon whom it is imposed. In essence, this two-stage approach determines whether it is reasonable to foresee that the defendant's acts or omissions will cause any damage to the plaintiff; if it is, then a presumption of a duty of care exists.[10] This test received heavy criticism in Governors of the Peabody Donation Fund v Sir Lindsay Parkinson & Co Ltd.[11] The courts in this case held that the neighbour principle laid down by Lord Atkin should be proved before the duty of care is presumed to exist, but that the scope of the duty depends on the facts of the case; the courts should consider whether the duty of care imposed on the defendant is just and reasonable. In Curran v Northern Ireland Co-ownership Housing Association Ltd,[12] the learned judge, Lord Keith, held that the Anns test had been given more importance than it deserved, and that the test need not be applied in future cases in establishing a duty of care. The third test used in determining a duty of care is the Caparo test, derived from Caparo Industries plc v Dickman.[13] In this case, three factors needed to be fulfilled to establish a duty of care: first, the courts must determine whether the damage caused was reasonably foreseeable; second, whether there is any policy consideration that negates the duty of care; and third, whether imposing the duty is just and reasonable. If these requirements are fulfilled, then a duty of care is established.[14] It is important to note that all three elements under the Caparo test need to be fulfilled in order for a duty of care to be established.

Development in Malaysia

In Malaysia, the courts have used all of the above tests. However, the test currently used by the courts is the three-stage test, namely the Caparo test. This can be seen in the case of Majlis Perbandaran Ampang Jaya v Steven Phoa Cheng Loon & Ors.[15] In this case, the Federal Court referred to the Caparo case to determine whether a duty of care existed. The issue that arises under this principle is whether it applies only to economic loss or extends to all situations. The courts used the foreseeability test and held that the test applies to all situations; the courts only had to determine whether the duty of care imposed upon the defendant is just and reasonable. The courts went on to state that it would be rare for the outcome of the test to be not just and reasonable.
This test was applied in a more recent Malaysian case, Projek Lebuh Raya Utara-Selatan Sdn Bhd v Kim Seng Enterprise (Kedah) Sdn Bhd.[16] In this case, the courts reiterated that the standard of care used to determine negligence is that of the reasonable man, and that it is an objective test. Another recent case is Jordan Saw Yu Huan v Low Suan Chuan & Ors.[17] In this case, the High Court applied the Caparo test, was of the view that it was just and reasonable to impose such a duty of care upon the defendants, and held that the defendants had breached that duty. It is therefore clear that the recent development in Malaysia regarding the test required to establish a duty of care is more inclined towards the three-stage approach commonly known as the Caparo test. The courts in Malaysia have followed the Caparo test because it requires the damage caused to the plaintiff to be reasonably foreseeable by the defendant; the defendant does not owe a duty of care if he cannot reasonably foresee the damage. This test is therefore more straightforward than the other tests laid down earlier.

[1] (1856) 11 Ex 781 at 784.
[2] (1883) 11 QBD 503 at 507.
[3] [1996] AC 923 at 930-931.
[4] [1932] AC 562 at 580 (HL).
[5] Norchaya Talib, Law of Torts in Malaysia (3rd edn, Sweet & Maxwell Asia 2011) 98.
[6] [1970] AC 1027.
[7] Dato' Mohd Hishamudin Yunus, 'Judicial Activism – The Way To Go?' [2012] 6 MLJ xvii.
[8] Thomas Beven, 'Principles of the Law of Negligence' (1889).
[9] [1978] AC 728.
[10] Norchaya Talib, Law of Torts in Malaysia (3rd edn, Sweet & Maxwell Asia 2011) 100.
[11] [1984] 3 All ER 529 (HL).
[12] [1987] 2 All ER 13, 710.
[13] [1990] 1 All ER 568 (HL).
[14] Norchaya Talib, Law of Torts in Malaysia (3rd edn, Sweet & Maxwell Asia 2011) 106.
[15] [2006] 2 MLJ 389 (FC).
[16] [2013] 5 MLJ 360 (CA).
[17] [2013] 4 MLJ 137.

Wednesday, May 6, 2020

Data Compression and Decompression Algorithms

Table of Contents

Introduction
1. Data Compression
1.1 Classification of Compression
1.2 Data Compression Methods
2. Lossless Compression Algorithms
2.1 Run-Length Encoding
2.1.1 Algorithm
2.1.2 Complexity and Data Compression
2.1.3 Advantages and Disadvantages
3. Huffman Coding
3.1 Huffman Encoding
3.2 Algorithm
4. The Lempel-Ziv Algorithms
4.1 Lempel-Ziv78
4.2 Encoding Algorithm
4.3 Decoding Algorithm
5. Lempel-Ziv Welch
5.1 Encoding Algorithm
5.2 Decoding Algorithm

INTRODUCTION:

Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms dedicated to compressing different data formats; even for a single data type there are several compression algorithms that use different approaches. This paper examines lossless data compression algorithms.

1. DATA COMPRESSION:

In computer science, data compression involves encoding information using fewer bits than the original representation. Compression is useful because it helps reduce the consumption of resources such as storage space or transmission capacity. Because compressed data must be decompressed to be used, this extra processing imposes computational or other costs through decompression.

1.1 Classification of Compression:

a) Static/non-adaptive compression.
b) Dynamic/adaptive compression.

a) Static/Non-adaptive Compression: A static method is one in which the mapping from the set of messages to the set of codewords is fixed before transmission begins, so that a given message is represented by the same codeword every time it appears in the message ensemble. The classic static defined-word scheme is Huffman coding.

b) Dynamic/adaptive Compression: A code is dynamic if the mapping from the set of messages to the set of codewords changes over time.

1.2 Data Compression Methods:

1) Lossless Compression: Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression, which is possible because most real-world data has statistical redundancy. For example, an image may have areas of colour that do not change over several pixels; instead of coding "red pixel, red pixel, ...", the data may be encoded as "279 red pixels".
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data could be deleterious. Typical examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression.

2) Lossy Compression: In information technology, lossy compression is a data encoding method that compresses data by discarding (losing) some of it. The procedure aims to minimize the amount of data that needs to be held, handled, and/or transmitted by a computer. Lossy compression is most commonly used to compress multimedia data (audio, video, and still images), especially in applications such as streaming media and internet telephony. If we take a photo of a sunset over the sea, for example, there are going to be groups of pixels with the same colour value, which can be reduced. Lossy algorithms tend to be more complex; as a result, they achieve better results for bitmaps and can accommodate the loss of data. The compressed file is an approximation of the original data. One of the disadvantages of lossy compression is that if the compressed file keeps being compressed, the quality degrades drastically.

2. Lossless Compression Algorithms:

2.1 Run-Length Encoding (RLE):

RLE stands for Run Length Encoding. It is a lossless algorithm that only offers decent compression ratios for specific types of data.

How RLE works: RLE is probably the easiest compression algorithm. It replaces sequences of the same data value within a file by a count number and a single value. Suppose the following string of data (17 bytes) has to be compressed:

    ABBBBBBBBBCDEEEEF

Using RLE compression, the compressed file takes up 10 bytes and could look like this (spaces added for readability; # stands for a control character marking a run, so each encoded run costs three bytes: marker, count, value):

    A #9B C D #4E F

2.1.1 Algorithm:

    i = 0;
    while (i < length) {
        count = 1;
        while (i + count < length && str[i + count] == str[i])
            count++;
        if (count == 1)
            cout << str[i];
        else
            cout << count << str[i];
        i += count;
    }

As you can see, RLE encoding is only effective if there are sequences of 4 or more repeating characters, because three characters are used to encode a run, so coding two repeating characters would even lead to an increase in file size. It is important to know that there are many different run-length encoding schemes. The above example has just been used to demonstrate the basic principle of RLE encoding. Sometimes the implementation of RLE is adapted to the type of data being compressed.

2.1.2 Complexity and Data Compression:

We are used to talking about the complexity of an algorithm by measuring time, and we usually try to find the fastest implementation, as in search algorithms. Here it is not so important to compress data quickly, but to compress it as much as possible so that the output is as small as possible without losing data. A great feature of run-length encoding is that the algorithm is easy to implement.

2.1.3 Advantages and disadvantages:

This algorithm is very easy to implement and does not require much CPU horsepower. RLE compression is only efficient with files that contain lots of repetitive data. These can be text files if they contain lots of spaces for indenting, but line-art images that contain large white or black areas are far more suitable. Computer-generated colour images (e.g. architectural drawings) can also give fair compression ratios.

Where is RLE compression used? RLE compression can be used in the following file formats: PDF files.
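To make the scheme concrete, here is a minimal runnable sketch in Python of the marker-based RLE described above. It is an illustration rather than a standard implementation: the marker character, the run threshold of 4, and the function name are assumptions made for this example, and counts are written as decimal digits rather than as a binary count byte.

    MARKER = "#"  # assumed control character; it must not occur in the data

    def rle_encode(data):
        """Emit runs of 4 or more identical characters as MARKER+count+char."""
        out = []
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i]:
                run += 1
            if run >= 4:
                # A long run costs three units: marker, count, symbol.
                out.append(f"{MARKER}{run}{data[i]}")
            else:
                # Runs shorter than four are cheaper left as literals.
                out.append(data[i] * run)
            i += run
        return "".join(out)

    print(rle_encode("ABBBBBBBBBCDEEEEF"))  # -> A#9BCD#4EF (10 bytes)

Run lengths above 9 would need more than one count digit here; real formats reserve a full count byte instead, which is the three-character cost per run mentioned above.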
3. HUFFMAN CODING:

Huffman coding is a popular method for compressing data with variable-length codes. Given a set of data symbols (an alphabet) and their frequencies of occurrence (or, equivalently, their probabilities), the method constructs a set of variable-length codewords with the shortest average length and assigns them to the symbols. Huffman coding serves as the basis for several applications implemented on popular platforms. Some programs use just the Huffman method, while others use it as one step in a multistep compression process.

3.1 Huffman Encoding:

The Huffman encoding algorithm starts by constructing a list of all the alphabet symbols in descending order of their probabilities. It then constructs, from the bottom up, a binary tree with a symbol at every leaf. This is done in steps, where at each step the two symbols with the smallest probabilities are selected, added to the top of the partial tree, deleted from the list, and replaced with an auxiliary symbol representing the two original symbols. When the list is reduced to just one auxiliary symbol (representing the entire alphabet), the tree is complete. The tree is then traversed to determine the codewords of the symbols.

3.2 Algorithm:

    Huffman(A) {
        n = |A|;
        Q = A;                      // min-priority queue keyed on frequency f
        for (i = 1 to n - 1) {
            z = new node;
            left[z]  = Extract_Min(Q);
            right[z] = Extract_Min(Q);
            f[z] = f[left[z]] + f[right[z]];
            Insert(Q, z);
        }
        return Extract_Min(Q);      // the root of the Huffman tree
    }
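As a concrete companion to the pseudocode, here is a short runnable Python sketch of the same bottom-up construction, using the standard library's heapq module as the priority queue. The function name and the return format (a symbol-to-codeword table) are choices made for this illustration.

    import heapq
    from itertools import count

    def huffman_codes(freqs):
        """Build a Huffman tree from {symbol: frequency} and return codewords."""
        tick = count()  # tie-breaker so heapq never compares subtrees directly
        heap = [(f, next(tick), sym) for sym, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            # Merge the two least frequent subtrees, as in the pseudocode loop.
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):       # internal node: descend both ways
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                             # leaf: record the symbol's code
                codes[node] = prefix or "0"
        walk(heap[0][2], "")
        return codes

    print(huffman_codes({"A": 5, "B": 2, "C": 1, "D": 1}))
    # e.g. -> {'B': '00', 'C': '010', 'D': '011', 'A': '1'}

More frequent symbols receive shorter codewords, which is what makes the average code length minimal.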
4. The Lempel-Ziv Algorithms:

The Lempel-Ziv algorithm is an algorithm for lossless data compression. It is not a single algorithm, but a whole family of algorithms, stemming from the two algorithms proposed by Jacob Ziv and Abraham Lempel in their landmark papers in 1977 and 1978. Lempel-Ziv algorithms are widely used in compression utilities such as gzip and in GIF image compression. The variants of the Lempel-Ziv algorithms are:

    LZ77 variants: LZR, LZSS, LZB, LZH
    LZ78 variants: LZW, LZC, LZT, LZMW

4.1 Lempel-Ziv78:

LZ78 is a dictionary-based compression algorithm. The codewords output by the algorithm consist of two elements: an index referring to the longest matching dictionary entry and the first non-matching symbol. In addition to outputting the codeword for storage/transmission, the algorithm also adds the index and symbol pair to the dictionary. When a symbol that is not yet in the dictionary is encountered, the codeword has the index value 0 and the symbol is added to the dictionary as well. With this method, the algorithm gradually builds up a dictionary.

4.2 Encoding Algorithm:

    Dictionary = empty;
    Prefix = empty;
    DictionaryIndex = 1;
    while (characterStream is not empty) {
        Char = next character in characterStream;
        if (Prefix + Char exists in the Dictionary)
            Prefix = Prefix + Char;
        else {
            if (Prefix is empty)
                CodeWordForPrefix = 0;
            else
                CodeWordForPrefix = DictionaryIndex of Prefix;
            Output: (CodeWordForPrefix, Char);
            insertInDictionary((DictionaryIndex, Prefix + Char));
            DictionaryIndex++;
            Prefix = empty;
        }
    }

Example 1: LZ78 Compression. Encode (i.e. compress) the string ABBCBCABABCAABCAAB using the LZ78 algorithm.

Compressed message: (0,A)(0,B)(2,C)(3,A)(2,A)(4,A)(6,B)
Note: the above is just a representation; the commas and parentheses are not transmitted.

Steps:
1. A is not in the Dictionary; insert it.
2. B is not in the Dictionary; insert it.
3. B is in the Dictionary. BC is not in the Dictionary; insert it.
4. B is in the Dictionary. BC is in the Dictionary. BCA is not in the Dictionary; insert it.
5. B is in the Dictionary. BA is not in the Dictionary; insert it.
6. B is in the Dictionary. BC is in the Dictionary. BCA is in the Dictionary. BCAA is not in the Dictionary; insert it.
7. B is in the Dictionary. BC is in the Dictionary. BCA is in the Dictionary. BCAA is in the Dictionary. BCAAB is not in the Dictionary; insert it.

LZ78 Compression: number of bits transmitted.

Uncompressed string: ABBCBCABABCAABCAAB. Number of bits = total number of characters * 8 = 18 * 8 = 144 bits.

Suppose the codewords are indexed starting from 1:

    Codeword:  (0,A)  (0,B)  (2,C)  (3,A)  (2,A)  (4,A)  (6,B)
    Index:       1      2      3      4      5      6      7

Each codeword consists of an integer and a character. The character is represented by 8 bits. The number of bits n required to represent the integer part of the codeword with index i is n = ceil(log2(i)), taking n = 1 for the first codeword.

Bits: (1 + 8) + (1 + 8) + (2 + 8) + (2 + 8) + (3 + 8) + (3 + 8) + (3 + 8) = 71 bits

The actual compressed message is: 0A0B10C11A010A100A110B (where each letter stands for its 8-bit code).

4.3 Decoding Algorithm:

    Dictionary = empty;
    DictionaryIndex = 1;
    while (there are more (CodeWord, Char) pairs in the codestream) {
        CodeWord = next CodeWord in the codestream;
        Char = character paired with CodeWord;
        if (CodeWord == 0)
            String = empty;
        else
            String = string at index CodeWord in the Dictionary;
        Output: String + Char;
        insertInDictionary((DictionaryIndex, String + Char));
        DictionaryIndex++;
    }

Example: LZ78 Decompression. Decoding the codewords above rebuilds the dictionary in the same order as the encoder and yields the decompressed message: ABBCBCABABCAABCAAB.
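The pair scheme above translates almost line for line into Python. The following runnable sketch assumes the same conventions as the worked example (dictionary indices start at 1, index 0 stands for the empty prefix); the function names are illustrative, and a match left pending at end of input is flushed as one final pair.

    def lz78_encode(text):
        """Emit (index of longest matching dictionary phrase, next char) pairs."""
        dictionary = {}          # phrase -> index (indices start at 1)
        prefix = ""
        pairs = []
        for ch in text:
            if prefix + ch in dictionary:
                prefix += ch     # keep extending the current match
            else:
                pairs.append((dictionary.get(prefix, 0), ch))
                dictionary[prefix + ch] = len(dictionary) + 1
                prefix = ""
        if prefix:
            # Flush a match left pending at end of input as one final pair.
            pairs.append((dictionary.get(prefix[:-1], 0), prefix[-1]))
        return pairs

    def lz78_decode(pairs):
        """Rebuild the text, growing the same dictionary as the encoder."""
        dictionary = {0: ""}     # index -> phrase
        out = []
        for index, ch in pairs:
            phrase = dictionary[index] + ch
            dictionary[len(dictionary)] = phrase
            out.append(phrase)
        return "".join(out)

    pairs = lz78_encode("ABBCBCABABCAABCAAB")
    print(pairs)               # [(0,'A'), (0,'B'), (2,'C'), (3,'A'), (2,'A'), (4,'A'), (6,'B')]
    print(lz78_decode(pairs))  # ABBCBCABABCAABCAAB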
5. Lempel-Ziv Welch:

This improved version of the original LZ78 algorithm is perhaps the most famous modification, and is sometimes even mistakenly referred to simply as the Lempel-Ziv algorithm. Published by Terry Welch in 1984, it basically applies the LZSS principle of not explicitly transmitting the next non-matching symbol to the LZ78 algorithm. The only remaining output of this improved algorithm is fixed-length references to the dictionary (indexes). If the message to be encoded consists of only one character, LZW outputs the code for this character; otherwise it inserts two- or multi-character, overlapping, distinct patterns of the message to be encoded into a dictionary.

Overlapping: the last character of a pattern is the first character of the next pattern.

5.1 Encoding Algorithm:

    Initialize Dictionary with 256 single-character strings and their corresponding ASCII codes;
    Prefix = first input character;
    CodeWord = 256;
    while (not end of character stream) {
        Char = next input character;
        if (Prefix + Char exists in the Dictionary)
            Prefix = Prefix + Char;
        else {
            Output: the code for Prefix;
            insertInDictionary((CodeWord, Prefix + Char));
            CodeWord++;
            Prefix = Char;
        }
    }
    Output: the code for Prefix;

Example: Compression using LZW. Encode the string BABAABAAA with the LZW encoding algorithm.

1. BA is not in the Dictionary; insert BA, output the code for its prefix: code(B).
2. AB is not in the Dictionary; insert AB, output the code for its prefix: code(A).
3. BA is in the Dictionary. BAA is not in the Dictionary; insert BAA, output the code for its prefix: code(BA).
4. AB is in the Dictionary. ABA is not in the Dictionary; insert ABA, output the code for its prefix: code(AB).
5. AA is not in the Dictionary; insert AA, output the code for its prefix: code(A).
6. AA is in the Dictionary and it is the last pattern; output its code: code(AA).

Compressed message: 66 65 256 257 65 260

LZW: number of bits transmitted.

Example: Uncompressed string: aaabbbbbbaabaaba. Number of bits = total number of characters * 8 = 16 * 8 = 128 bits.
Compressed string (codewords): 97 256 98 258 259 257 261. Number of bits = total number of codewords * 12 = 7 * 12 = 84 bits.
Note: each codeword is 12 bits because the minimum dictionary size is taken as 4096, and 2^12 = 4096.

5.2 Decoding Algorithm:

    Initialize Dictionary with the 256 ASCII codes and the corresponding single-character strings as their translations;
    PreviousCodeWord = first input code;
    Output: string(PreviousCodeWord);
    Char = first character of string(PreviousCodeWord);
    CodeWord = 256;
    while (not end of code stream) {
        CurrentCodeWord = next input code;
        if (CurrentCodeWord exists in the Dictionary)
            String = string(CurrentCodeWord);
        else
            String = string(PreviousCodeWord) + Char;
        Output: String;
        Char = first character of String;
        insertInDictionary((CodeWord, string(PreviousCodeWord) + Char));
        PreviousCodeWord = CurrentCodeWord;
        CodeWord++;
    }

Summary of the LZW decoding algorithm:

    Output: string(first CodeWord);
    while (there are more CodeWords) {
        if (CurrentCodeWord is in the Dictionary)
            Output: string(CurrentCodeWord);
        else
            Output: PreviousOutput + first character of PreviousOutput;
        insert in the Dictionary: PreviousOutput + first character of CurrentOutput;
    }

Example: LZW Decompression. Use LZW to decompress the output sequence 66 65 256 257 65 260.

1. 66 is in the Dictionary; output string(66), i.e. B.
2. 65 is in the Dictionary; output string(65), i.e. A; insert BA.
3. 256 is in the Dictionary; output string(256), i.e. BA; insert AB.
4. 257 is in the Dictionary; output string(257), i.e. AB; insert BAA.
5. 65 is in the Dictionary; output string(65), i.e. A; insert ABA.
6. 260 is not in the Dictionary; output previous output + its first character: AA; insert AA.
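To round off the section, here is a short runnable Python sketch of the LZW encoder/decoder pair above, including the special case the decoder handles when it receives a code it has not created yet (step 6 in the decompression example). It assumes a 256-entry ASCII-initialised dictionary and non-empty input; the function names are illustrative.

    def lzw_encode(text):
        """Classic LZW: output dictionary codes only, growing the dictionary."""
        dictionary = {chr(i): i for i in range(256)}
        next_code = 256
        prefix = text[0]
        codes = []
        for ch in text[1:]:
            if prefix + ch in dictionary:
                prefix += ch              # extend the current match
            else:
                codes.append(dictionary[prefix])
                dictionary[prefix + ch] = next_code
                next_code += 1
                prefix = ch
        codes.append(dictionary[prefix])  # flush the final match
        return codes

    def lzw_decode(codes):
        """Rebuild the text; the dictionary lags one step behind the encoder."""
        dictionary = {i: chr(i) for i in range(256)}
        next_code = 256
        previous = dictionary[codes[0]]
        out = [previous]
        for code in codes[1:]:
            # Special case: the code was created by the encoder in the step
            # just before, so it is not in the decoder's dictionary yet.
            entry = dictionary.get(code, previous + previous[0])
            out.append(entry)
            dictionary[next_code] = previous + entry[0]
            next_code += 1
            previous = entry
        return "".join(out)

    codes = lzw_encode("BABAABAAA")
    print(codes)              # [66, 65, 256, 257, 65, 260]
    print(lzw_decode(codes))  # BABAABAAA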

Monday, May 4, 2020

Electronic Health Record Systems

Question: Discuss the Electronic Health Record Systems and Standard Vocabularies.

Answer:

Introduction

The introduction and adoption of information technology have led to the development of electronic health records, which have vastly improved the administration of medical care in recent years, making them crucial in the medical world. An electronic health record can be described in non-technical language as an electronic rendition of a patient's medical history. Its success, however, rests on its incorporation of standardized vocabularies, which increase interdepartmental communication while medical care is offered to the patient. Standard medical vocabularies, terminologies, or coding frameworks are organized lists of terms which, together with their definitions, are intended to describe the healthcare administered to patients unambiguously. These vocabularies cover diseases, prescription of drugs and medication, and so on. They are used to support the recording and communication of a patient's care at different levels of detail, through the electronic health record (Tastan, 2014).

Rationale for standard vocabularies in electronic health records (EHRs)

Electronic health record systems are the next step in the progress of medical services and can strengthen the connection between patients and medical practitioners. The readiness and accessibility of information will enable these practitioners to make better choices and give better care (Moorhead, 2014). In a scenario where data in numerous areas is to be gathered, shared, and integrated whenever required, common vocabularies for personal, clinical, and public health data are needed, allowing the individual to be satisfied with the information received. Other reasons for incorporating standardized terminologies include:

1. Accessing and acquiring coded information using numerous properties and at different levels of specificity than originally coded.
2. Providing medical practitioners with decision support while offering treatment (e.g. drug prescription), increasing shared understanding throughout the field.
3. Reviewing the quality of administration and benchmarking, while supporting research activities.
4. Organizing data entry with flexibility of expression, from novice to expert.

Examples of vocabularies or terminologies:

1. ABC Codes
2. Clinical Care Classification (CCC)
3. International Classification of Nursing Practice (ICNP)
4. Logical Observation Identifiers Names and Codes (LOINC)
5. NANDA International
6. Nursing Minimum Data Set (NMDS)
7. Nursing Outcomes Classification (NOC)
8. Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT)

Issues associated with EHR design, and solutions

Some frameworks that focus on capturing fine-grained essential clinical information have been restrictive, constrained, and troublesome for clinicians to use, leading to low user acceptance. This issue can be addressed by creating training programs in which medical practitioners are taught how to use the systems. Additionally, questionnaire-based research could be undertaken on how medical practitioners would want the systems tailored to their advantage (Nelson and Staggers, 2017).

Lack of a semantic underpinning: the incorporation of vocabularies or terminologies in most EHRs lacks a semantic underpinning, which can lead to misdiagnoses or confusion. This problem could be solved by updating systems and adding description-logic-encoded systems, e.g. SNOMED CT.
Existing medical vocabularies vary in their scope and completeness. This issue could be addressed by undertaking and investing in medical research. Complete clinical terminology frameworks are needed to help integrate patient information with EHRs; SNOMED CT is expected to help structure and modernize the medical record, but it must be used correctly and consistently to protect data quality and maximise shareability (Ajami and Bagheri-Tadi, 2013). Finally, there is no single, standard, comprehensive vocabulary, which hampers the flow of medical information that could prove fruitful later; this could be addressed by increased research and investment (Paganin and Rabelo, 2013).

References

Ajami, S., & Bagheri-Tadi, T. (2013). Barriers for adopting electronic health records (EHRs) by physicians. Acta Informatica Medica, 21(2), 129.

Nelson, R., & Staggers, N. (2017). Health informatics: An interprofessional approach. Elsevier Health Sciences.

Paganin, A., & Rabelo, E. R. (2013). Clinical validation of the nursing diagnoses of impaired tissue integrity and impaired skin integrity in patients subjected to cardiac catheterization. Journal of Advanced Nursing, 69(6), 1338-1345.

Tastan, S., Linch, G. C., Keenan, G. M., Stifter, J., McKinney, D., Fahey, L., ... Wilkie, D. J. (2014). Evidence for the existing American Nurses Association-recognized standardized nursing terminologies: A systematic review. International Journal of Nursing Studies, 51(8), 1160-1170.

Tseng, H., & Moorhead, S. (2014, July). The use of standardized terminology to represent nursing knowledge: nursing interventions relevant to safety for patients with cancer. In Nursing Informatics (Vol. 201, pp. 298-333).