Monday, January 27, 2020

Reasoning in Artificial Intelligence (AI): A Review

1: Introduction

Artificial Intelligence (AI) is a developing area of computer science that aims to design and develop intelligent machines capable of demonstrating a high level of resilience in complex decision-making environments (López, 2005[1]). The computations that make it possible for a device to perceive, reason, and act form the basis of effective AI (National Research Council Staff, 1997[2]) in any computational device (e.g. computers, robots). AI in a given environment can therefore be accomplished only by simulating real-world scenarios as logical cases with associated reasoning, so that the device can deliver an appropriate decision for the current state of its environment (López, 2005). Reasoning is thus one of the key elements in the collection of computations that make up AI. It is also worth noting that the effectiveness of reasoning in AI has a significant bearing on a machine's ability to interpret and react to the state of its environment or the problem it faces (Ruiz et al, 2005[3]). This report presents a critical review of reasoning as a component of effective AI. It first gives a critical overview of the concept of reasoning and its application in AI programming for the design and development of intelligent computational devices. This is followed by a critical review of selected research material on the topic, and finally an overview of the field, including progress made to date, key problems faced and future directions.

2: Reasoning in Artificial Intelligence

2.1: About Reasoning

Reasoning is the key logical element that provides the ability for human interaction in a given social environment, as argued by Sincák et al (2004)[4]. A central aspect of reasoning is that an individual's perception is based on reasons derived from the facts relative to the environment as that individual interprets them. In a computational environment involving electronic devices or machines, therefore, the ability of a machine to deliver a given reason depends on the extent to which the social environment can be quantified into logical conclusions with the help of a reason or a combination of reasons (Sincák et al, 2004). A major feature of human reasoning is that it is accompanied by introspection, which allows the individual to interpret reasons through self-observation and the reporting of consciousness. This naturally builds resilience to exceptional situations in the social environment, enabling a non-feeble-minded human to react in one way or another to a situation that is unique in that environment. It is also important to appreciate that, from a mathematical perspective, reasoning mainly corresponds to the extent to which a given environmental state can be interpreted using probability in order to predict the reaction or consequence of any situation through a sequence of actions (Sincák et al, 2004).
The above relates to uncertainty in the environment, which challenges the normal reasoning approach to deriving a specific conclusion or decision. The introspective nature developed in humans and some animals provides the ability to cope with this uncertainty. This adaptive nature of the non-feeble-minded human is the key ingredient that makes it possible to interpret the reasons for a given situation rather than merely follow the logical path produced by the reasoning process. Reasoning in AI, which aims to reproduce these abilities in electronic devices so that they can perform complex tasks with minimal human intervention, is presented in the next section.

2.2: Reasoning in Artificial Intelligence

Reasoning is one of the key components of effective artificial intelligence programs for tackling complex decision-making problems with machines, as argued by Sincák et al (2004). This is because the logical path a program follows to reach a specific decision depends mainly on its ability to handle exceptions along the way. Effective use of logical reasoning to define the past, present and future states of a problem, alongside plausible exception handlers, is therefore the basis for successfully delivering a decision in the chosen environment. The key areas of challenge for reasoning are discussed below (National Research Council Staff, 1997).

Adaptive Software – This is the area of AI programming that faces the greatest challenge in enabling effective decision-making by machines. The key requirement in adaptive software development is the effective identification of the various exceptions and the ability to handle them dynamically from a set of generic rules, as argued by Yuen et al (2002)[5]. The fuzzy matching and de-duplication techniques that are popular in data-cleansing tools in business environments follow this adaptive-software concept: the software's ability to decide the best possible outcome for a given situation is programmed using a basic set of directory rules, further enhanced by references to a database of logical combinations of reasons that can be applied to the situation (Yuen et al, 2002). Fuzzy matching is also considered a major breakthrough in the adaptive programming of machines and computing devices, because the program not only refers to a set of rules and associated reference data but also interprets the combination of reasons derived for the situation before arriving at a decision. From the above it is evident that the effective development of adaptive software for an AI device depends mainly on the extent to which the software can interpret the reasons before deriving the decision (Yuen et al, 2002).
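To make the data-cleansing idea above concrete, the following is a minimal sketch of fuzzy matching used for de-duplication, written in Python with the standard difflib module. The threshold value, the customer names and the normalisation rules are illustrative assumptions, not details taken from Yuen et al (2002) or any particular tool.

```python
from difflib import SequenceMatcher

# Illustrative similarity threshold; a real data-cleansing tool would tune this
# value and its normalisation rules to the data set being cleansed.
THRESHOLD = 0.85

def similarity(a, b):
    """Return a 0..1 similarity ratio between two normalised strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def deduplicate(records):
    """Keep the first occurrence of each record and drop near-duplicates."""
    kept = []
    for record in records:
        if all(similarity(record, existing) < THRESHOLD for existing in kept):
            kept.append(record)
    return kept

# Hypothetical customer names; near-duplicate spellings are filtered out.
customers = ["ACME Ltd", "ACME  Ltd.", "Acme Limited", "Widget Corp"]
print(deduplicate(customers))
```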
From the above discussion it is clear that adaptive software programming in AI is not only an area of challenge but also one with extensive scope for development, enabling the simulation of complex real-world problems. It is also important to appreciate that adaptive software in AI is expected not only to identify and interpret reasons using a set of rules and combinations of outcomes, but also to demonstrate a degree of introspection. In other words, adaptive software is expected to turn the device into a learning machine rather than merely an efficient exception handler, as argued by Yuen et al (2002). This in turn opens up knowledge management as a component of the AI device for accomplishing a degree of introspection similar to that of a non-feeble-minded human.

Speech Synthesis/Recognition – This area of AI can be seen as a derivative of adaptive software, whereby the speech or audio stream captured by the device is deciphered and the appropriate task performed (Yuen et al, 2002). Speech recognition poses the key issues of matching, reasoning for access control and decision-making, and exception handling, on top of the traditional issues of noise filtering and isolating the speaker's voice for interpretation. In speech synthesis, by contrast, the major issue is decision-making, since only the decision reached through logical reasoning can produce the appropriate response to be synthesised into speech by the machine. Speech synthesis therefore depends primarily on the adaptive nature of the software involved, as argued by Yuen et al (2002): the reasons derived from interpreting the input, using the decision-making rules and fuzzy-matching combinations, form the basis for synthesising the sentences that make up the speech. The grammar of the sentences so framed, and their reproduction, depends heavily on the initial decision of the adaptive software using the logical reasons identified for the given situation. Hence the complexity of speech synthesis and recognition poses a great challenge for effective reasoning in AI.

Neural Networks – This is yet another key challenge for AI programming using reasoning, because neural networks aim to implement the local behaviour observed in the human brain, as argued by Jones (2008)[6]. The layers of perceptrons, and the level of complexity that arises through the interaction between those layers, operate alongside decision-making through logical reasoning (Jones, 2008). Computation of a decision using the neural-network strategy is therefore aimed at solving highly complex problems subject to a greater level of external influence from uncertainties that interact with, or depend significantly on, one another. The adaptive software approach to reasoned decision-making in machines thus forms the basis for neural networks with a significant level of complexity and interdependency (reference 8).
The Single Layer Perceptrons (SLPs) discussed by Jones (2008), and the representation of Boolean expressions using SLPs, further show that effective deployment of neural networks can help simulate complex problems and build resilience within the machine. The learning capability, and the extent to which knowledge management can be incorporated as a component of the AI machine, can be defined through the identification and simulation of SLPs and their interactions within a given problem environment (Jones, 2008). Neural networks also open up the possibility of handling multi-layer perceptrons as part of adaptive software, by programming each layer independently before enabling interaction between the layers as part of the reasoning for decision-making (Jones, 2008). The key influence on this is the programmers' ability to identify the key input and output components for generating the reasons that drive the decision-making. The backpropagation (backward error propagation) algorithm deployed in neural networks is a salient feature that helps a program learn from its mistakes and errors, as argued by Jones (2008). Backpropagation in multi-layer networks is one of the major areas where the adaptive capabilities of an AI application can be strengthened to reflect the real-world problem-solving skills of a non-feeble-minded human (Jones, 2008). It follows that neural-network implementations of AI applications can be sustained using backpropagation as an error-correction technique, and this self-correcting, learning approach is one of the major elements that can help simulate complex problems using AI applications. The discussion of reasoning in the light of neural networks shows that effective use of the layer-based approach to simulating problems, so as to allow for interaction between layers, will help achieve reliable AI application development methodologies. It also shows that reasoning is one of the major elements that enables real-world problems to be simulated by computers or robots, regardless of their complexity.

2.3: Issues in the philosophy of Artificial Intelligence

The first issue faced when using AI to simulate complex real-world problems is the need to replicate the real-world environment inside the computer so that the device can compute the reasons and arrive at a decision. The simulation involved in replicating the environment cannot always account for exceptions that arise from unique human behaviour during interaction (Jones, 2008). The absence of this facility, together with the fact that the created environment cannot alter itself fundamentally (beyond being altered by changes in the state of the entities interacting within it), is a major hurdle for effective AI application development. Beyond environment replication, AI programmers also face the problem that the reasoning processes, and how exhaustive they are, are limited by the knowledge and skills of the analysts involved.
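As an illustration of the single-layer perceptron idea referred to above, the following is a minimal sketch (not code from Jones, 2008) of the classic perceptron learning rule applied to the Boolean AND function. The learning rate, number of epochs and initial weight range are arbitrary assumptions chosen for the example.

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train a single-layer perceptron (step activation) on (inputs, target) pairs."""
    n_inputs = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            output = 1 if activation > 0 else 0
            error = target - output              # perceptron learning rule
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Boolean AND is linearly separable, so a single-layer perceptron can learn it;
# XOR is not, which is why multi-layer networks and backpropagation are needed.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
for x, target in and_samples:
    output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", output, "(target:", target, ")")
```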
It follows that the process of reasoning, modelled on a non-feeble-minded human's response to a real-world problem, varies from one individual to another. Consequently, only fundamental logical reasons can be simulated in an AI application; the complex, individual-dependent derivation of combinations of reasons cannot be replicated effectively in a computer, as argued by López (2005). Finally, reasoning in AI is expected to provide a mathematical combination that delivers the desired result, which in many cases cannot be accomplished because of the uniqueness of the decisions made by the non-feeble-minded individual involved. This poses a great challenge to the successful implementation of AI in computers and robotics, especially for complex problems that have many possible outcomes.

3: Critical Summary of Research

3.1: Paper 1 – Programs with Common Sense by Dr McCarthy

This ambitious paper by Dr McCarthy aims to describe an AI application that can help overcome the issues in speech recognition and logical reasoning that pose significant hurdles to AI application development. However, delivering this in the form of an "advice taker" is a rather feeble approach to representing the solution to a problem of such magnitude. Although the paper targets verbal reasoning processes that are simple in nature, the interpretation of verbal reasoning in the light of a given problem relative to an environment is not a simple component to simulate, as discussed in section 2. "One will be able to assume that the advice taker will have available to it a fairly wide class of immediate logical consequences of anything it is told and its previous knowledge" (Dr McCarthy, p. 2). This statement suggests that the advice taker is intended to deliver an AI application with knowledge management as a core component of its logical reasoning, since it implies the program reaches its decisions by drawing on a wide range of immediate logical consequences of what it is told and of its previous knowledge. The advice taker is therefore not an unviable approach, as knowledge management strategies for logical reasoning are both debated and actively developed across a wide range of scientific problems simulated using AI; the Two Stage Fuzzy Clustering based on knowledge discovery presented by Qain in Da (2006)[7] is a classic example. It is also worth noting that the knowledge management aspect of AI programming depends mainly on the speed of accessing and processing information in order to deliver an appropriate decision for the given problem (Yuen et al, 2002). A classic example is the use of fuzzy matching for validation or suggestion-list generation in an Online Transaction Processing (OLTP) application in real time.
In this scenario, a portion of the data provided by the user is interpreted using fuzzy matching to arrive at a set of concrete choices for the user to choose from (Jones, 2008). The step of choosing the appropriate option from that suggestion list is precisely the component being replaced by AI, so that the machine itself chooses the best fit for the given problem. The same applies to the advice taker, which aims to respond to the verbal reasoning processes of the day-to-day life of a non-feeble-minded individual. The author's objective 'to make programs that learn from their experience as effectively as humans do' makes clear that the knowledge-management approach relies on the program using database-style storage to store and access its knowledge and previous experience. The advice taker may therefore be a viable option if the processing speed needed for retrieval and storage of information from a database of such magnitude, one that will grow exponentially, can be made available to the AI application. This could be achieved through grid computing and other processing capabilities, given the availability of electronic components at affordable prices. The major issue, however, is the design of such an application and the logical reasoning processes for retrieving the information needed to arrive at a decision for a given problem. From the discussion in section 2 it is evident that greater complexity in logical reasoning results in a higher level of computation to account for external variants before an appropriate decision can be delivered, and this cannot be accomplished without the ability to process the existing logical reasons held in the application's knowledge base. Hence the processing speed and computational efficiency, in terms of both architecture and software, are questions that must be addressed in implementing such a system. Although the advice taker is viable from a hardware perspective, the hurdle is the software, which must be capable of delivering the level of abstraction discussed by the author. The ability to change the behaviour of the system merely by giving it verbal commands is the main challenge faced by AI application developers, because it can only be achieved by using the speech recognition and logical reasoning already available to the software to incorporate each new logical reason as an improvement or correction to the application's existing set-up. This is the major hurdle, and it also poses the challenge of distinguishing the speech patterns that constitute such corrective commands from the statements by which the user simply provides information to the application. From the above arguments it can be concluded that the author's statement – "If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent this abstraction in some relatively simple way" – does not describe a task that is easily realisable.
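To make the 'immediate logical consequences' idea quoted from Dr McCarthy more concrete, the following is a minimal forward-chaining sketch in Python. The facts and rules loosely echo the desk/home/airport flavour of the paper's examples but are illustrative assumptions, not McCarthy's actual formalism or any particular advice-taker implementation.

```python
# Rules are (premises, conclusion) pairs and "knowledge" is the set of facts the
# program has been told. Everything here is illustrative only.
RULES = [
    ({"at(I, desk)"}, "at(I, home)"),
    ({"at(I, home)", "have(I, car)"}, "can(I, drive_to(airport))"),
]

def immediate_consequences(facts):
    """Forward-chain: repeatedly apply rules until no new conclusion follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

told = {"at(I, desk)", "have(I, car)"}
new_facts = immediate_consequences(told) - told
print(new_facts)  # derived: at(I, home) and can(I, drive_to(airport))
```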
It should also be noted that abstractions recognisable by the user can be realised by an AI application only if that application already holds a set of reasons, or has room to learn new reasons from existing ones, prior to decision-making. This can only be accomplished through complex algorithms, including the error propagation algorithms discussed in section 2. Without appropriate self-corrective and learning algorithms, realising the advice taker's capability to represent any abstraction in a relatively simple way is therefore far-fetched. The fact that learning involves not only capturing the application's previous actions in similar scenarios but also generating logical reasons from new information provided by users is an aspect of AI that is still under development, yet it is a necessary ingredient for the advice taker. Nevertheless, considering the timeline of Dr McCarthy's research and developments to date, AI application development has advanced considerably in interpreting information from the user and providing an appropriate decision through logical reasoning. The author's suggestion that, for a machine to learn arbitrary behaviour, one should simulate possible arbitrary behaviours and try them out is a method used extensively in twenty-first-century AI for computers and robotics: the knowledge built into machines programmed using AI comes largely from simulated arbitrary behaviours whose results are loaded into the machine as logical reasons for the application to consult when faced with a given problem. The author's five features necessary for an AI application also remain viable in the current development environment, although the ability of a system to create subroutines that can be included into procedures as units is still a complex task; the required processor speed and hardware architecture, rather than the actual development of such a system, is the problem faced by developers. The author's statement that 'In order for a program to be capable of learning something it must first be capable of being told it' concerns one of the many components of AI development that have seen tremendous progress since the dawn of the twenty-first century (Jones, 2008). The multiple-layer processing strategy used today to address complex real-world problems, with influential variants in both the input and the output, is in keeping with this statement. The neural networks for adaptive behaviour presented in detail by Pfeifer and Scheier (2001)[8] further justify this view, and open up discussion of the extent to which the advice taker could learn from experience by using neural networks as an adaptive-behaviour component for programming robots and other devices facing complex real-world problems. This is the kind of adaptive behaviour envisaged by the advice taker that Dr McCarthy described nearly half a century ago.
Using neural networks to take commands in the form of sentences (imperative or declarative) is plausible with the adaptive-behaviour strategy described above. Finally, the construction of the advice taker described by the author could be met in the current AI development environment, although it would have been an enormous challenge at the time the paper was published. In the twenty-first century the advice taker could be built using a combination of computers and robotics, or either one alone, depending on the delivery scope of the application and its operational environment. Some of the hurdles, however, would lie in speech recognition and in distinguishing imperative sentences from declarative ones. A second issue would be the scope of the application: simulating the various instances needed to generate the knowledge database is plausible only within the defined scope of the application's target environment, unlike the non-feeble human mind, which can interact with multiple environments with ease. The multiple-layer neural network approach may help tackle this problem only up to a point, because distinguishing between different environments represented as layers is not easily achievable without knowledge of how to interpret them stored within the system. Finally, self-corrective AI systems are plausible in the twenty-first century, but self-learning systems that work from the logical reasons provided are still scarce and require a greater level of design resilience to account for variation in inputs and outputs. The stimulus-response forms described by the author are realisable using multiple-layer neural networks, with the limitation that the scope of the advice taker be restricted to a specific problem or set of problems; the adaptive behaviour simulated using neural networks, mentioned earlier, supports this.

3.2: Paper 2 – A Logic for Default Reasoning

Default reasoning in twenty-first-century AI applications is one of the major elements that allows systems to keep functioning rather than terminate unexpectedly when unable to handle an exception raised by some combination of logic, as argued by Pfeifer and Scheier (2001). In current AI development, default reasoning is used to supply a fallback while an exhaustive list of simulated reasons and rule combinations is being managed. However, what counts as "exhaustive" for a given environment is limited by the number of simulations the developers can construct at design time and by the adaptive capabilities of the system after implementation (reference 8). Effective use of default reasoning in AI application development can therefore be achieved only by handling the wide variety of exceptional conditions that arise in the normal operating environment of the problem being simulated (Pfeifer and Scheier, 2001).
In the light of these arguments, the author's assertion that default reasoning involves beliefs which may well be modified or rejected by subsequent observations holds true in the current AI development environment. The default reasoning strategy described by the author is a critical component of AI application development, mainly because default reasons are intended not only to prevent unhandled exceptions from leading to abnormal termination of the program, but also to support the learn-from-experience strategy implemented within the application. The learning-from-experience discussion in section 2, together with the discussion in section 3.1, shows that assigning a default reason in an adaptive AI application makes it possible to identify exceptions that occur in the course of solving problems, capturing new exceptions that can replace the existing default value. At the same time, heavy reliance on default reasoning can limit the learning capabilities of an application in cases where its adaptive behaviour is not effective, even though the default reason prevents abnormal termination of the system. The logical representation of exceptions and defaults, and the author's interpretation of the phrase 'in the absence of any information to the contrary' as 'consistent to assume', supports this reading. It is further evident from the author's arguments that creating a default reason and implementing it in a neural network as a set of logical reasons is more complex than a typical case-wise conditional analysis of whether a given condition holds for the situation at hand. Another interesting point is that the definition of conditions must leave room for partial success, since the typical logical dichotomy of success or failure does not always apply to AI problems. The application must therefore be able to accommodate partial success, as well as arrive at a concrete figure for the given problem, in order to generate an appropriate decision. The discussion of the non-monotonic character of such systems concerns the ability to formulate the conditions for default reasoning deliberately, rather than defaulting merely because the system has failed to accommodate changes in the environment, as argued by Pfeifer and Scheier (2001). Carbonell (1980)[9] further argues that type hierarchies and their influence on an AI system have a significant bearing on the default reasoning strategies defined for it. Introducing type hierarchies allows the application not only to interpret the problem against the stored rules and reference data but also to place it within the hierarchy in order to judge whether applying a default reason to the problem is viable. Carbonell's (1980) arguments on Single-Type and Multi-Type inclusion, with either strict or non-strict partitioning, support this view.
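The following is a minimal sketch of the default-reasoning idea discussed above: a default attached to a general type (birds normally fly) is assumed 'in the absence of any information to the contrary' and rejected once a contradicting observation about a more specific type (penguin) is added. The bird/penguin example is the standard textbook illustration, not Reiter's formal default logic, and the predicates used are illustrative assumptions.

```python
# A default conclusion is assumed in the absence of information to the contrary,
# and withdrawn when a contradicting fact (an exception attached to a more
# specific type) is later observed.
def flies(animal, facts):
    """Apply the default 'birds fly' unless an exception is known."""
    if f"penguin({animal})" in facts:   # information to the contrary
        return False
    if f"bird({animal})" in facts:      # default applies
        return True
    return False                        # nothing known, no conclusion drawn

facts = {"bird(tweety)", "bird(opus)"}
print(flies("tweety", facts))   # True: the default is consistent to assume
facts.add("penguin(opus)")      # a subsequent observation arrives
print(flies("opus", facts))     # False: the default belief is rejected
```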
It is further important to appreciate that effective implementation of a type hierarchy in a logical reasoning environment gives the AI application a greater level of granularity in defining and interpreting the reasons pertaining to a given problem (Pfeifer and Scheier, 2001). It is in this state that an AI application can achieve a significant level of independence and the ability to interact effectively in its environment with minimal human intervention. The discussion of inheritance mechanisms presented by Carbonell (1980), alongside the use of inheritance properties as the basis for the implementation of AI systems in the twenty-first century (Pfeifer and Scheier, 2001), further justifies the need for default reasoning as an interactive component, rather than a problem-solving constant used merely to prevent abnormal termination.

Sunday, January 19, 2020

Positivism Vs. Classicism

In this essay, the Classical and Positivist theories of criminology will be explored and critically discussed to examine the impacts they have had on modern-day policing, the introduction of laws, and police practice. The essay will first look briefly at the history of the Classical theory, examining Beccaria and Bentham's classical school of criminology and its effects. Positivist theorists will then be identified and the theory discussed, outlining the main thesis and beliefs of both theories. How each theory defines a criminal will then be considered, and related theories such as broken windows theory (Wilson and Kelling 1982), labelling theory (Becker 1982), strain theory (Merton 1957) and rational choice theory (Homans 1961) will be used throughout the essay to explore the effects the classical and positivist theories have had on policing concepts such as public order policing and community policing, touching on criminal justice systems and modern-day police practice.

Classical criminological thought can be traced to the criminal justice system and the penal system. Beccaria's 1764 publication on crimes and punishments introduced a serious consideration of the harm caused to society by crime, an ideological outline of the basis for punishments, and the relationship between the state and the offender (Beccaria 2003). Beccaria's approach to the prevention of crime is often distilled down to three ideas: certainty, how likely punishment is to occur; celerity, how quickly punishment is inflicted; and severity, how much pain is inflicted (Newburn 2007). A later criminologist, Jeremy Bentham, then published writings on penology and the notion of the "rational free-willed character of offenders" (Maguire et al 2002), and advanced the study of crime by bringing the central concerns of free will and rational choice together in an attempt at a more logical analysis of crime and suitable punishment.

In the twilight years of the 19th century the emergence of the Italian school of criminology sparked a departure in thinking on the study of crime. The school's founding member, Cesare Lombroso, introduced a holy bible of sorts into the criminological world in that he contributed to the introduction of scientific methodology in the study of crime. Lombroso most notably introduced a biological positivism into the study of crime: an "atavistic heredity" (Lombroso 1911) in relation to the cause of offending, in which physical features were viewed as evidence of an innately criminal nature, a kind of criminal anthropology. His work was then continued and elaborated by two other Italian scholars, Ferri (1856-1929) and Garofalo (1852-1934) (Newburn 2007), who elaborated on the environmental factors that can also affect criminal behaviour in relation to positivist criminology. Positivism carries the main assumption that the methods of the natural sciences could and should be applied to the social world, suggesting that natural science should be used as the methodological base for analysing and conducting research into policing and policing concepts. Positivists believe that research should consist of social and scientific knowledge gained through observation and scientific data; facts must be separated from values, and there is usually a preference for quantitative data over qualitative (adapted from Bottoms 2000, cited in Newburn 2007).
In 1913 the positivist theorist Charles Goring published a book called The English Convict. The book logged a study undertaken over 13 years, in which 3,000 British convicts were examined against a control group of non-convict males to try to find out whether the criminal could be categorised as a certain type of person; no significant physical differences were found between the two groups (Goring 1913). Critical of this study, and convinced that the criminal is organically inferior (quoted in Brown et al 2004), Earnest Hooton conducted his own research into the criminal as a certain type of person and introduced somatotyping into positivist criminology. Somatotyping rests on the belief that evolution was dominated by superior types, arguing that the criminal had a certain type and that evolution could eventually eradicate the criminal. Hooton was criticised for having poor data and an unrepresentative control group. However, Hooton's work sparked further interest among positivist criminologists, and William Sheldon looked into somatotypes further in 1949 (Newburn 2007), concluding that there were three body types a person could have: endomorph, mesomorph and ectomorph. These body types were, basically, short and fat; large and muscular; and lean and fragile. Sheldon argued that each of these body types was related to particular personality traits and that all individuals possessed varied traits, though certain traits were more predominant than others. In modern-day policing and criminology we use a theory called labelling theory. This theory was first put forward by Howard Becker in 1963; Becker claimed that criminal elements are associated with physical appearance and that 'criminal' becomes a label attached to a certain type of person. In 2011 it was a common belief that a criminal wore a certain type of clothing, namely a hooded jacket or 'hoodie'. Articles were even published in newspapers like the Guardian (Guardian 2011) under the title "The power of the Hoodie". Amplified by the media, this piece of clothing became an instant link to criminal behaviour and deviance. Positivist theory can be linked here with labelling theory to show the development of the idea of a 'criminal type' and how, in modern-day policing, we use these theories to determine and define the word criminal. Following the work of Emile Durkheim, Robert K. Merton's strain theory (1957) can also be linked into this concept: the positivist belief is that criminal behaviour can be encouraged by social, physical and biological elements, and the strain theory thesis is that pressure from social surroundings can encourage an individual to commit crime. If an individual is singled out by somatotype or through labelling, they may feel social strain or believe that they should become deviant, which could actually pressure that individual into committing criminal acts. An example of where this kind of concept was familiar was the London riots of 2011. Classical criminology, however, argues against the concept of a criminal being defined by a certain type. Bentham stated that every person has free will and is able to make a rational choice based on the situation they are in at the time and what they feel would be the appropriate action to take.
Classicism disagrees with the positivist view of the criminal as a certain type of person and holds instead that the criminal can derive from within any person. Everybody has free will and the ability to make an informed decision about their actions in any situation they may be in; the criminal is therefore an element every person has the potential to exploit, rather than, as positivist theory holds, an element biologically woven into a person's DNA. Classicism had a major effect on the criminal justice system and penology: punishments were believed to be best given according to the appropriateness of the crime in question. This idea became the basis for our criminal justice systems today. With the introduction of the classical school of criminology, the use of capital punishment and torture declined, and in their place prison systems were introduced as core elements of the justice and punishment systems we have today. The abolition of capital punishment has had an indescribably large effect on our modern penal systems; the effects are vast and include the introduction of fundamental law such as the Human Rights Act (HRA 1998). Acts like this are incredibly important in criminal trials and give every person rights such as the right to a fair trial and the prohibition of torture. Classical criminology influenced these modern-day laws because its theorists believed that the punishment for a crime should be based on the scale of what has been done and should be appropriate to the crime itself. Classical criminology has also influenced the construction of our prison systems as the core element of the way we punish criminals, instead of using inhumane methods of capital punishment, by considering the scale of the crime and deciding on an appropriate sentence for the criminal. Here another theory shaped by classicist theories and beliefs can be looked at: rational choice theory (Homans 1961). The theory is based on the assumption that criminal activity is committed by an individual after weighing up the risk and reward of an action; if the person believes that the reward is greater than the risk, they may be more likely to commit a crime than if the risk were greater than the reward. This theory supports Bentham's notions of free will and rational choice. Free will and rational choice can be used to help explain the way we police public order. In a public order policing situation, like a protest or a riot, every person who attends and participates does so out of their own free will; a protester may not riot because they believe that the risk of being arrested is greater than the reward of violently voicing their opinions. Positivism, however, argues that the criminal is a definite type of person who can be influenced by social, physical or biological surroundings. These assumptions can be seen in cases of rioting and community crime. The London riots happened in 2011 and escalated throughout the country, with riots in places like Birmingham, Liverpool and Manchester as well as other locations.
The reason these riots spread is the social influence that pressured younger people to join in; here broken windows theory (Wilson and Kelling 1982), labelling theory (Becker 1982), strain theory (Merton 1957) and rational choice theory (Homans 1961) can all be related, through classicist and positivist views, to our modern-day policing methods. Broken windows theory states that a run-down or derelict area can encourage crime. This relates to the positivist assumption that criminal behaviour is encouraged by physical surroundings, and evidence of this happening in the London riots came when shops had been broken into and fires started: the streets were wrecked, and this would have encouraged further acts of violence. Merton's strain theory and Becker's labelling theory are also applicable here, as the social strain of many young people committing crime would encourage more young people to commit crime; because individuals could see crimes being committed around them without any action being taken, deviance was further encouraged, since, as rational choice theory says, the risk appeared lower than the reward. These positivist-based theories meant that police in the London riots, and in most public order situations, would target younger individuals when looking for criminal activity and making arrests. The classicist side of the influence on public order would then come after the arrest, at trial, where offenders would be questioned about why they had committed these crimes of their own free will and then put through the justice system, being sentenced on the classical assumption that the punishment should be appropriate to the crime committed. Positivist assumptions can also be linked to the concept of community policing. Positivists believe that crime and criminal behaviour can be influenced by social and physical surroundings. Wilson and Kelling (1982) also believe this is the case, as their broken windows theory looks at how the area a person lives in can affect their attitude towards crime and committing crime. Through the classicist belief in community deterrence, police practices have been introduced to arm the police with powers that they can use to their advantage in the war on crime. The Police and Criminal Evidence Act (1984) and the Police Reform Act (2002) have seen the introduction of new police powers and a new national policing plan. These police practices include powers such as stop and search, which gives any police constable the ability to stop any citizen and search them if they believe they have reasonable grounds to do so. Classicist and positivist theories have also had an effect on the way we police our communities. PCSOs (police community support officers) were introduced in 2002 under the Police Reform Act (2002) and help to improve community relations with the police. This police practice supports the positivist belief that criminals can be influenced by social and physical surroundings, as better relationships are built with the community and things like team projects are created to improve derelict areas and the social situations people may find themselves in, by offering things like youth clubs and activities.
This deters crime by drawing people away from delinquency and encouraging them to take part in constructive, positive activity. The theorist David Matza outlined that positivist theory drew on three sets of problematic assumptions: the first being differentiation, the assumption that offenders can be separated from non-offenders by definitive characteristics; the second being determinism, the assumption that biological, physiological or social factors determine criminal behaviour; and the third being pathology, the assumption that an offender is an offender because something has gone wrong in their lifetime (Tierney 1996). The problem with these views is that they fail to take into account rationality, choice and human decision-making. They define a criminal as a certain kind of person, and if a person falls into the category defined by positivist theory as criminal, it implies they must carry the traits of a criminal, which is simply not true, as shown by Charles Goring's work (1913). Classicist theory argues for rational choice and free will, but what if a person has an impaired ability to make decisions and acts without being rational? Power and wealth are also a problem for the theory: if classicism applied to all in the same sense, why is it that people with less power and wealth tend to be the predominant residents of the criminal justice system, and not the wealthy? There are other factors that both these theories have failed to take into consideration, and they sit at opposite ends of the scale: positivist theory says that criminals are a type of person, while classicist theory says that a criminal offence can be committed by anybody, as we all have free will and rational choice. Without the classical school of criminology and the positivist theorists, vital procedures and acts that are fundamental to the way our society and criminal justice system operates today would not have been put in place. Classicism changed the way we sentence criminals and the construction of our prison systems, which are of great importance to the modern justice system. Positivist theory has influenced the way we police in terms of public order and community policing through the introduction of the Human Rights Act (1998), the Police and Criminal Evidence Act (1984) and the Police Reform Act (2002). These acts have allowed modern-day police to take the best assumptions from the classicist theorists and the best assumptions from the positivists and use them to create a criminal justice system that incorporates the best of each theory into the police practices and concepts used from day to day in modern-day policing.

Saturday, January 11, 2020

Language Learning Strategies

Over the last few decades, the 'college of self-education' has assumed more importance than the 'college of education.' That is to say, a noticeable transformation has taken place in language learning: the emphasis is more on learners and learning than on teachers and teaching. The system of language education has undergone metamorphic changes, and the focus is now on the learner. The learner-centred curriculum and learner-centredness in language education are the concepts now in practice, and many papers and articles have appeared emphasising this shift. The use of language learning strategies (LLS) in second and foreign language (L2/FL) learning and teaching has become part of language syllabi.

Defining Language Learning Strategies:

Weinstein and Mayer (1986) defined learning strategies (LS) broadly as "behaviors and thoughts that a learner engages in during learning" which are "intended to influence the learner's encoding process" (p. 315). Later, Mayer (1988) more specifically defined LS as "behaviors of a learner that are intended to influence how the learner processes information" (p. 11). Human beings have an innate tendency to process language and learning, which in fact means processing information. Learning skills are an inseparable part of the learning process, whatever the content or context, and are put to use in all subjects, such as mathematics, history, geography and language. Learning environments vary; they can be informal as well as classroom settings. As for L2/FL education, it has been defined by Tarone (1983) as "an attempt to develop linguistic and sociolinguistic competence in the target language — to incorporate these into one's interlanguage competence" (p. 67). The earlier focus was on linguistic or sociolinguistic competence; this has progressively changed, and the current emphasis is on processes and the characteristics of LLS. One point, incidentally: LLS are distinct from learning styles. Learning styles mainly concern innate, inborn and preferred ways of noting, absorbing and processing acquired information and skills. There is, however, a distinct relationship between one's own style of learning a language and the language learning strategies one adopts.

Good language learners/highly proficient students:

The way of learning a language varies from person to person, and a single best way to learn a language cannot be singled out. The best way to pick up a language comes from within: you have an intense desire to learn a particular language and therefore immerse yourself in related activities that help the cause. Read books, watch movies, interact with people who speak that language, and study related articles in magazines. If you cultivate a circle of friends in the language of your choice, you pick up the language quickly. You need not pay intense attention to grammar at the initial stages. Join a tutored course and own a self-study package. Tutored learning is the commonly accepted mode of learning and acquiring skill in a language: an experienced classroom teacher who has handled hundreds of students in the past knows their initial problems and the related solutions, and can provide motivation for language learners. Language learning need not be a serious and tense exercise. If you travel and tour the country of the target language, your language-related questions and problems find an automatic solution.
Over time, you find that you have picked up the language.

Foreign language learning strategies:

Research aimed at finding the best method to teach a language is voluminous, and the most relevant answers to this problem have come from learners themselves. It was found that tested strategies play an effective role in language learning. Of all the classifications, the one by Oxford (1990) provided a system and stability to the whole process. Oxford viewed learning strategies as "specific actions taken by the learner to make learning easier, faster, more enjoyable, more self-directed, more effective, and more transferable to new situations" (p. 8). The strategies are divided into two categories (Oxford, 1990, p. 16):

Direct strategies, further classified into a) memory strategies, b) cognitive strategies and c) compensation strategies.
Indirect strategies, further classified into a) metacognitive strategies, b) affective strategies and c) social strategies.

Memory strategies are i) creating mental images, ii) applying images and sounds, and iii) reviewing well. Cognitive strategies are i) practising, ii) analysing and reasoning, and iii) creating structure for input and output. Compensation strategies are i) guessing intelligently and ii) overcoming limitations in speaking and writing. As for the indirect strategies, metacognitive strategies are i) centring your learning, ii) arranging and planning your learning, and iii) evaluating your learning. Affective strategies are i) lowering your anxiety, ii) encouraging yourself, and iii) taking your emotional temperature. Social strategies are i) asking questions, ii) cooperating with others, and iii) empathising with others (Oxford, 1990, p. 17).

Factors affecting the choice of learning strategies:

Many factors influence the selection of strategies employed by students learning a second language. The most important factor is motivation: a highly motivated student is different from a less motivated one. If one has a particular and strong reason for learning the language, one picks up the language fast; sometimes career prospects are linked to the language, and in such cases one is expected to learn it within a specified period. Females use such strategies to a greater degree than their male counterparts. Memorisation is related to cultural background, and Asian students have shown a higher degree of expertise in this area. Attitudes and beliefs play a dominant role: negative attitudes do not help the cause, while positive attitudes have a profound effect. The type of task helps determine the strategy employed to carry it out. As for age, older and more advanced students employ different strategies. Learning style is also one of the important factors in the selection of a strategy, and tolerance of ambiguity is directly related to strategy selection (Language…).

Proficiency and language learning strategies:

The number of English language learners is rising steadily, so special interventions for underachievers are necessary. Different approaches have been tried for teaching academic content to students for whom English is a second language; it is no ordinary task to teach a student in a language in which he has no mastery. A great deal of information is now available about students from different cultural and linguistic backgrounds. Firstly, the traditional Peer-Assisted Learning Strategies used to enhance student proficiency in English are effective.
Such a strategy has shown positive results for reading achievement. Another intervention is the Bilingual Cooperative Integrated Reading and Composition program, which was beneficial for Spanish-speaking students; its focus is on writing and reading in both Spanish and English language activities, with students divided into small cooperative learning groups. Another intervention is Instructional Conversations and Literature Logs, whose goal is to enhance comprehension ability as well as English language proficiency. Importance is given to small-group discussions: the teachers act as facilitators while the group of students is engaged in telling stories, relating personal experiences that help them understand each other, keeping to topics and concepts, writing short notes independently in response to writing prompts, and answering questions related to the stories. These exercises have high potential effects on language learners and contribute to the rapid development of English language skills; they also help communication skills. "The Vocabulary Improvement Program for English Language Learners and Their Classmates (VIP) is a vocabulary development curriculum for English language learners and native English speakers (grades 4-6). The 15-week program includes 30-45 minute whole class and small group activities, which aim to increase students' understanding of target vocabulary words included in a weekly reading assignment." (What Works…) Many more such interventions are employed, and language learning strategies followed, for proficiency in the English language.

Why are LLS important for L2?

"Within 'communicative' approaches to language teaching a key goal is for the learner to develop communicative competence in the target L2/FL, and LLS can help students in doing so." The importance of communication strategies is an essential element of strategic competence, although communication skills and language learning strategies differ in substance: the speaker makes an intentional and conscious effort to communicate in an L2/FL. All the strategies that L2/FL learners use in the language they intend to learn are covered under LLS. LLS are essential for learning the language because they are the proper tools for self-initiated active involvement, which is necessary for enhancing communicative skills.

Conclusion:

During the last few decades, many changes have occurred in teachers' professional learning, and consequently they have influenced and affected teaching methods and standards for students. Computers have greatly influenced patterns of teaching and studying, and one can see effective use of technology in all areas. The pattern of collaborative activity between teachers and students has also undergone perceptible changes, and such changes are for the better: they have helped to create a drastic improvement in communication and speaking skills. Teachers understand the needs of the students better, and students understand the expectations of the teachers even better. In this materialistic world of fast-moving technological advances, expertise in communication and the spoken language is an important aspect of career growth.

References Cited:

Weinstein, C., & Mayer, R. (1986). The teaching of learning strategies. In M.C. Wittrock (Ed.), Handbook of Research on Teaching, 3rd Edition (pp. 315-327). New York: Macmillan.
Alexander (Eds.), Learning and Study Strategies: Issues in Assessment, Instruction, and Evaluation (pp. 11-22). New York: Academic Press.Oxford, R. (1990). Language learning strategies: What every teacher should know. Boston: Heinle & Heinle.Language Learning Strategies: Article: An Update Oxford (1990a) synthesized existing research on how t he following factors influence the choice of strategies used among students learning a second language. †¦www.cal.org/resources/digest/oxford01.html – 25k -Retrieved on June 16,2007Article: What Works Clearinghouse: English Language Learning Peer-Assisted Learning Strategies is an instructional program for use in †¦ develop reading comprehension ability along with English language proficiency. †¦ies.ed.gov/ncee/projects/wwc/english_language.asp – 25k – Retrieved on June 16,2007Tarone, E. (1983). Some thoughts on the notion of ‘communication strategy'. In C. Faerch & G. Kasper (Eds.), Strategies in Inter language Communication (pp. 61-74). London: Longman.

Thursday, January 2, 2020

The Attack on Pearl Harbor December 7, 1941

On the morning of December 7, 1941, the Japanese launched a surprise air attack on the U.S. Naval Base at Pearl Harbor in Hawaii. After just two hours of bombing, more than 2,400 Americans were dead, 21 ships* had either been sunk or damaged, and more than 188 U.S. aircraft had been destroyed. The attack at Pearl Harbor so outraged Americans that the U.S. abandoned its policy of isolationism and declared war on Japan the following day, officially bringing the United States into World War II.

Why Attack?
The Japanese were tired of negotiations with the United States. They wanted to continue their expansion within Asia, but the United States had placed an extremely restrictive embargo on Japan in the hopes of curbing Japan's aggression, and negotiations to settle their differences had not been going well. Rather than give in to U.S. demands, the Japanese decided to launch a surprise attack against the United States in an attempt to destroy the United States' naval power before any official declaration of war was given.

The Japanese Prepare for Attack
The Japanese practiced and prepared carefully for their attack on Pearl Harbor. They knew their plan was extremely risky; the probability of success depended heavily on complete surprise. On November 26, 1941, the Japanese attack force, led by Vice Admiral Chuichi Nagumo, left Etorofu Island in the Kurils (located northeast of Japan) and began its 3,000-mile journey across the Pacific Ocean. Sneaking six aircraft carriers, nine destroyers, two battleships, two heavy cruisers, one light cruiser, and three submarines across the Pacific Ocean was not an easy task. Worried that they might be spotted by another ship, the Japanese attack force continually zig-zagged and avoided major shipping lanes. After a week and a half at sea, the attack force made it safely to its destination, about 230 miles north of the Hawaiian island of Oahu.

The Attack
On the morning of December 7, 1941, the Japanese attack on Pearl Harbor began. At 6:00 a.m., the Japanese aircraft carriers began launching their planes amid rough seas; in total, 183 Japanese aircraft took to the air as part of the first wave of the attack. At 7:15 a.m., the carriers, plagued by even rougher seas, launched 167 additional planes to participate in the second wave. The first wave of Japanese planes reached the U.S. Naval Station at Pearl Harbor (located on the south side of the Hawaiian island of Oahu) at 7:55 a.m. Just before the first bombs dropped, Commander Mitsuo Fuchida, leader of the air attack, called out, "Tora! Tora! Tora!" ("Tiger! Tiger! Tiger!"), a coded message telling the entire Japanese navy that they had caught the Americans totally by surprise.

Surprised at Pearl Harbor
Sunday mornings were a time of leisure for many U.S. military personnel at Pearl Harbor. On the morning of December 7, 1941, many were still asleep, eating breakfast in the mess halls, or getting ready for church, completely unaware that an attack was imminent. Then the explosions started. The loud booms, pillars of smoke, and low-flying enemy aircraft shocked many into the realization that this was not a training exercise; Pearl Harbor was really under attack. Despite the surprise, many acted quickly: within five minutes of the beginning of the attack, several gunners had reached their anti-aircraft guns and were trying to shoot down the Japanese planes.
At 8:00 a.m., Admiral Husband Kimmel, in charge of Pearl Harbor, sent out a hurried dispatch to the entire U.S. naval fleet: "AIR RAID ON PEARL HARBOR X THIS IS NOT DRILL."

The Attack on Battleship Row
The Japanese had been hoping to catch U.S. aircraft carriers at Pearl Harbor, but the carriers were out to sea that day. The next most important naval targets were the battleships. On the morning of December 7, 1941, there were eight U.S. battleships at Pearl Harbor: seven were lined up in what was called Battleship Row, and one (the Pennsylvania) was in dry dock for repairs. (The Colorado, the only other battleship of the U.S. Pacific Fleet, was not at Pearl Harbor that day.) Since the Japanese attack was a total surprise, many of the first torpedoes and bombs dropped on the unsuspecting ships hit their targets, and the damage done was severe. Although the crews on board each battleship worked feverishly to keep their ship afloat, some were destined to sink.

The seven U.S. battleships on Battleship Row:
Nevada - Just over half an hour after being hit by one torpedo, the Nevada got underway and left her berth in Battleship Row to head toward the harbor entrance. The moving ship made an attractive target for the Japanese bombers, who caused enough damage that the Nevada was forced to beach herself.
Arizona - The Arizona was struck a number of times by bombs. One of these bombs, thought to have hit the forward magazine, caused a massive explosion which quickly sank the ship. Approximately 1,100 of her crew were killed. A memorial has since been placed over the Arizona's wreckage.
Tennessee - The Tennessee was hit by two bombs and was damaged by oil fires after the nearby Arizona exploded, but she stayed afloat.
West Virginia - The West Virginia was hit by up to nine torpedoes and quickly sank.
Maryland - The Maryland was hit by two bombs but was not heavily damaged.
Oklahoma - The Oklahoma was hit by up to nine torpedoes and listed so severely that she turned nearly upside down. A large number of her crew remained trapped on board; rescue efforts were able to save only 32 of them.
California - The California was struck by two torpedoes and hit by a bomb. The flooding grew out of control and the California sank three days later.

Midget Subs
In addition to the air assault on Battleship Row, the Japanese launched five midget submarines. These midget subs, which were approximately 78 1/2 feet long and 6 feet wide and held only a two-man crew, were to sneak into Pearl Harbor and aid in the attack against the battleships. However, all five were sunk during the attack on Pearl Harbor.

The Attack on the Airfields
Attacking the U.S. aircraft on Oahu was an essential component of the Japanese attack plan. If the Japanese succeeded in destroying a large portion of the U.S. airplanes, they could proceed unhindered in the skies above Pearl Harbor, and a counter-attack against the Japanese attack force would be much less likely. Thus, some of the first wave of Japanese planes were ordered to target the airfields surrounding Pearl Harbor. As the Japanese planes reached the airfields, they found many of the American fighter planes lined up along the airstrips, wingtip to wingtip, making easy targets. The Japanese strafed and bombed the planes, hangars, and other buildings located near the airfields, including dormitories and mess halls.
By the time the U.S. military personnel at the airfields realized what was happening, there was little they could do. The Japanese were extremely successful at destroying most of the U.S. aircraft. A few individuals picked up guns and shot at the invading planes, and a handful of U.S. fighter pilots managed to get their planes off the ground, only to find themselves vastly outnumbered in the air. Still, they were able to shoot down a few Japanese planes.

The Attack on Pearl Harbor Is Over
By 9:45 a.m., just under two hours after the attack had begun, the Japanese planes left Pearl Harbor and headed back to their aircraft carriers. The attack on Pearl Harbor was over. All Japanese planes had returned to their carriers by 12:14 p.m., and just an hour later the Japanese attack force began its long journey homeward.

The Damage Done
In just under two hours, the Japanese had sunk four U.S. battleships (Arizona, California, Oklahoma, and West Virginia). The Nevada was beached, and the other three battleships at Pearl Harbor received considerable damage. Also damaged were three light cruisers, four destroyers, one minelayer, one target ship, and four auxiliaries. Of the U.S. aircraft, the Japanese managed to destroy 188 and damage an additional 159. The death toll among Americans was quite high: a total of 2,335 servicemen were killed and 1,143 were wounded, while 68 civilians were also killed and 35 were wounded. Nearly half of the servicemen who were killed were on board the Arizona when it exploded. All this damage was done by the Japanese, who suffered very few losses themselves: just 29 aircraft and five midget subs.

The United States Enters World War II
The news of the attack on Pearl Harbor quickly spread throughout the United States. The public was shocked and outraged; they wanted to strike back. It was time to join World War II. At 12:30 p.m. on the day following the attack on Pearl Harbor, President Franklin D. Roosevelt gave an address to Congress in which he declared December 7, 1941, "a date which will live in infamy." At the end of the speech, Roosevelt asked Congress to declare war on Japan. With only one dissenting vote (by Representative Jeannette Rankin of Montana), Congress declared war, officially bringing the United States into World War II.

* The 21 ships that were either sunk or damaged include all eight battleships (Arizona, California, Nevada, Oklahoma, West Virginia, Pennsylvania, Maryland, and Tennessee), three light cruisers (Helena, Honolulu, and Raleigh), three destroyers (Cassin, Downes, and Shaw), one minelayer (Oglala), one target ship (Utah), and four auxiliaries (Curtiss, Sotoyomo, Vestal, and Floating Drydock Number 2). The destroyer Helm, which was damaged but remained operational, is also included in this count.