Global Marketing Standardisation – Dissertation Sample

Global marketing standardisation vs adaptation

The world of business and marketing grows more competitive by the day (Crews, C. and Thierer, A. (2003) 272-285). Processes of change such as globalisation and technological advancement pit actors in the business world against one another ever more confrontationally as time passes.

Various marketing strategies and mechanisms have evolved as a consequence, and the actors who employ them hope to gain a competitive edge through these means. Among these methods are branding, standardisation and adaptation strategies, which are employed to influence directly the purchasing behaviour of customers and prospective customers. There is no question that all of these strategies have a significant impact upon the customer mindset and the wider strategic and business environment, and equally no question that, without an understanding of these techniques, what they are and how they operate, any business actor who wishes to be successful will be at an acute disadvantage (Crews, C. and Thierer, A. (2003) 272-285). Markets and economies have been heavily influenced by branding and other business optimisation techniques (Crews, C. and Thierer, A. (2003) 272-285; Cronin, A. (2000) 1-10).

Markets and the business environment have also been heavily influenced and affected by processes termed corporate social responsibility. These have placed an ethos of ethics at the core of many corporations and marketing strategies. They have proven both beneficial and counterproductive to corporate actors, and these arguments are made throughout the thesis. It remains the case that the course of business involves the solicitation of custom and the satisfaction of the most essential gatekeeper of success: the satisfied customer. Yet this ultimate objective of business and corporate actors is not easily attained.

The customer has changed in response to the changing world. The task for the corporate actor is often merely one of catch-up, sensitivity and understanding. These seem everyday characteristics; however, they have proven elusive to many, and their absence may prove catastrophic even for well-resourced businesses. There remains a heated debate over which marketing strategies to employ and when, and what the strengths and weaknesses of each approach are. There is also an unresolved debate over which interpretations of how each method should be implemented are the most beneficial. This thesis investigates all of the above themes and issues: how they may be considered as abstractions, how they relate to and affect one another, and how they may be integrated within successful marketing campaigns, in the context of global marketing strategies.

Aims of the Thesis

The ultimate aim of the thesis is to achieve a better understanding of the theoretical frameworks which govern the domain of business strategy, considered alongside the field of marketing strategy. Theoretical terms such as standardisation and adaptation will be examined and defined as part of the research. The process of globalisation, and how it relates to these other theoretical concepts, will also be considered. These concepts will all be evaluated individually and in the context of global marketing strategies.

Using these concepts as a springboard, the thesis will ultimately focus on particular international marketing issues such as branding, how best to market brands, the various trajectories of purchasing behaviour and customer perspective, and how these concepts coalesce within appropriate marketing strategies. They will be considered in terms of how they may assist with international market entry, market research and marketing mix decisions. The impact and significance of culture and of marketing communications is another important theme which will be considered both directly and indirectly as an underpinning of the thesis.

Success in the contemporary business environment is increasingly measured by the global financial performance of companies. For this reason the thesis will involve six case studies of companies which employ various marketing and business strategies. The case studies will be evaluative and will be integrated throughout the thesis as part of the discussion of how each marketing strategy identified may be evaluated. Each case study will involve a multinational, internationally recognised corporate actor, for whom the replication of successful business and marketing strategies and models can yield heavy gains, or equally heavy losses.

The stakes are high for these types of companies and they typically go to huge lengths to ensure the success of marketing campaigns, which is why they will be the focus of the research. In general terms, and by way of introduction, the concept of a business strategy, the concept of a brand and how these concepts relate to one another will be given some consideration. These concepts will be discussed within the particular context of marketing strategy, and the global marketing environment will be assessed in these theoretical terms. The next section explains the methodological background to the thesis.

Methodology

The methodology used to achieve the aims laid out in the introduction will be primarily qualitative in nature. The qualitative paradigm holds that research may be carried out using the researcher as the research instrument, through which data is gathered, subjectively formulated and interpreted (Darlington, Y. and Scott, D. (2002) 122-125). Conversely, quantitative research refers to methods of data collection which involve structure and objectivity; the quantitative paradigm is therefore more rigid and ordered than its qualitative counterpart. This does not imply an absence of scientific veracity within the qualitative arm of research; it merely alludes to a variation in the processes used in data collection. The collection of qualitative data may therefore be influenced by a subjective viewpoint, and thus the outcome of the research is always more open to question on the grounds of subjectivity (Darlington, Y. and Scott, D. (2002) 122-125).

Quantitative research, on the other hand, seeks to exclude the possibility of error, and sees the introduction of subjectivity at the research stage as a flaw in research design which detracts from the possibility of obtaining reliable and uniform results (Darlington, Y. and Scott, D. (2002) 122-125). In overall terms, therefore, the qualitative paradigm is more akin to an exercise of trial and error, in which concepts and attitudes are probed and a final conclusion drawn as a product of the researcher's own judgement, whereas the quantitative paradigm sets out a hypothesis from the beginning of the investigation and then endeavours to prove or disprove it using objective scientific means (Darlington, Y. and Scott, D. (2002) 122-125).

Case studies of six different companies will be examined in an effort to apply the different theoretical frameworks considered throughout the thesis. These six companies are McDonald's, Virgin, Aviva, EMI Records, Dell and Orange. The financial performance of these companies will be analysed qualitatively throughout the thesis, accompanied by a detailed analysis of the strategies employed by each company and whether they are more aligned to the standardisation or the adaptation approach to business strategy. The theoretical frameworks identified in the thesis will therefore be contextualised with contemporary financial data.

Case studies will be used to gather information and to assimilate the theoretical underpinning of the research within a contemporary business framework. Case studies have been chosen because it is felt that the ultimate aims of the thesis may be met through a focus on small-scale, highly specific qualitative sources of information. The subjects of the case studies have been drawn from the international corporate environment. This focus was chosen because the writer felt it would be more advantageous to select these sorts of organisations over smaller companies, as it is arguable that more attention is given to business modelling within such organisations.
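As an illustration only of how this contextualisation might be organised in practice, the following is a minimal sketch in Python. The strategy labels and growth figures in it are hypothetical placeholders, not findings of the thesis or data drawn from the companies named; real values would come from company annual reports and the qualitative coding of each case.

from dataclasses import dataclass

# Minimal sketch: organising case-study material for qualitative comparison.
# All figures and strategy labels below are hypothetical placeholders, not
# findings of the thesis; real values would come from company reports.

@dataclass
class CaseStudy:
    company: str
    strategy: str          # "standardisation", "adaptation" or "mixed"
    revenue_growth: float  # year-on-year revenue growth, as a fraction

cases = [
    CaseStudy("McDonald's", "mixed", 0.05),
    CaseStudy("Virgin", "adaptation", 0.03),
    CaseStudy("Aviva", "adaptation", 0.04),
    CaseStudy("EMI Records", "standardisation", -0.02),
    CaseStudy("Dell", "standardisation", 0.06),
    CaseStudy("Orange", "mixed", 0.02),
]

# Group the cases by strategic orientation so that the qualitative discussion
# can be read against a simple quantitative backdrop.
by_strategy: dict[str, list[CaseStudy]] = {}
for case in cases:
    by_strategy.setdefault(case.strategy, []).append(case)

for strategy, group in by_strategy.items():
    average_growth = sum(c.revenue_growth for c in group) / len(group)
    names = ", ".join(c.company for c in group)
    print(f"{strategy}: {names} (mean growth {average_growth:.1%})")

Such a tabulation would not replace the qualitative discussion; it would simply give each chapter a consistent frame within which the standardisation and adaptation cases can be compared.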

How Theoretical Frameworks can assist with international market entry, market research and marketing mix decisions

What is a Business/Marketing Strategy? 

A business and marketing strategy involves devising a systematic way of optimising the sales of a particular product and/or service (Crews, C. and Thierer, A. (2003) 272-285). This involves planning and researching how best to introduce a particular product and/or service within a given context such as a country or a market. Strategy involves consideration of factors such as the quality, price and durability of a given product and/or service, and the identification of market cleavages containing those who may be most interested in, or susceptible to, targeted marketing strategies. As an abstract concept, strategy also involves recognition of what the weaknesses of a business might be. Factors such as an ineffective method of communicating with customers, an ineffectual record of retaining customers, poor public relations, poor marketing relations, bad press, low staff morale and poor organisational capacity can all represent weaknesses in these terms, and the list is far from exhaustive.

Devising a business strategy involves the compilation of detailed data about markets, customers, and customer and prospective customer preferences, and the formulation of a detailed plan as to how best to integrate these data in order to maximise sales, customer satisfaction and customer loyalty, amongst other factors, and ultimately to augment the profits of the company (Barrett, B. et al. (1998) 15, 147).

Devising a business strategy also involves understanding the mindset of customers and prospective customers, so an understanding of human behaviour and how it impacts purchasing behaviour must form part of any effective marketing or business strategy (Barrett, B. et al. (1998) 15, 147). The factors which are most relevant in this context are human perceptions of beauty, youth, status, quality, good service, reliability and durability. It may be argued that the adage 'the customer is always right' is appropriately married to the notion of an effective marketing or business strategy, since it is the perception which counts, not the ethics of the perception, its political correctness, its nobility or its veracity (Barrett, B. et al. (1998) 15, 147).

Marketing and business strategies increasingly seek to align the strategic aims of a company, its aims in terms of profitability and functionality, with the aims of specific marketing campaigns. Marketing strategies are often devised by groups of project managers who have a semi-autonomous role within the company and for whom it is necessary to liaise with senior management about the resources and manpower which will ultimately be devoted to the business and marketing objectives. The marketing strategy is a complex phenomenon. It is difficult to define because it is difficult to replicate, or indeed encapsulate.

Rather, marketing strategies are often conceived on a pragmatic basis, with the actors involved drawing perspective from others in the field and from similarly piloted projects, and from how these have or have not been successful. The concept of a business strategy is very much shaped by the actors who are involved in the process. Conversely, as will be discussed in the thesis, a marketing strategy may be conceived in a more prescriptive fashion. Actors may seek to follow a plan of action which has been used, for example, within another country and hope to reproduce the same benefits. These ideas may prove difficult to implement: while evaluation of the outcome of a project is relatively easy to pin down, evaluation while the project is live is far less straightforward, as the project is in many ways already immutably drawn.

Marketing and business strategies are highly competitor-orientated, and in many ways the contemporary business environment is predicated upon knowledge of what direct competitors are working on. In this sense, the marketing strategy of any one company or corporate actor will rarely be singular; rather, it will be exposed to other actors in the field who may seek to reproduce or imitate it. This is very different from the situation of a product, since a product may be copyrighted or protected through patenting processes. These principles do not apply in the same way to a marketing strategy. The world of marketing strategy is fast-paced, and the disadvantages which may arise from competitors imitating a project's implementation design often have to be accepted as part of the world of business. These disadvantages are, of course, also offset within this context, since marketing strategists will rarely be in a situation where their inspirations and ideas are not augmented by those of others.

Globalisation

Globalisation refers to the idea that the world is more interconnected and cohesive than it has been historically (Gattiker, U. (2001) 3-4). The effects of this process may be described as a higher incidence of movement between particular cultures and more mixed populations. Concepts are more likely to have global resonance and significance in a globalised world. What used to be, by necessity, the targeting of sectoral cleavages has been replaced by a choice available to business actors: global messaging, or more localised efforts to communicate business messages to customers and prospective customers (Barrett, B. et al. (1998) 15, 147; Gattiker, U. (2001) 3-4).

A number of factors have influenced the development of this process. Perhaps the most significant of these is the rise of technological developments, one important example of this being the internet. The internet has been defined as follows: ‘The Internet is not one place or one company. It is a descriptive term for a web of thousands of interconnected broad- and narrow-band telephone, satellite, and wireless networks built on existing and planned communication technology. This infrastructure is a network of networks, reaching out and connecting separate islands of computer, telephone, and cable resources into a seamless web. It connects businesses, governments, institutions, and individuals to a wide range of information-based services, ranging from entertainment (e.g., pay-per-view movies, online music videos), education, and culture to data banks, cyberspace commerce, banking and other services… (Gattiker, U. (2001) 3-4)’.

The internet has facilitated communications (Gattiker, U. (2001) 3-4), as have inventions and innovations such as the mobile telephone, video messaging and conferencing, electronic mail, instant messaging and internet 'blogs', to name just a few (Howes, D. (1996) Chapter One; Inge, M. 1-20; Jennings, B. and Heath, R. (2000) Chapters 1-5; Crews, C. and Thierer, A. (2003) 272-285). The fact that travel is more readily available and more accessible than ever before has also facilitated the process of interconnection (Gattiker, U. (2001) 3-7). As a secondary result of this increased cohesion in communication, it has become possible for different nationalities and cultures to operate more closely together, particularly within the sphere of business (Crews, C. and Thierer, A. (2003) 272-285).

Globalisation has joined local and regional communities with the global economy and the rest of the world. The idea that business cannot be done simply because of physical separation, or that business may be thwarted by physical separateness, is slowly becoming a thing of the past. Globalisation has made the internet a source of income and, even in less direct logistical terms, a mode of business transaction. Various secure methods of transaction are now possible via the internet, and in a sense these functionalities represent another lifting of the barriers once imposed by physical separation.

Conversely, however, the internet, and the globalised features and interconnectedness to which it is married, are not foolproof. The globalised world is one which arguably incorporates higher degrees of risk. The dangers associated with systems failure, technological hitches or a lack of internet security, which can be problematic for business, remain features of the internet which limit its effectiveness and trustworthiness somewhat. Nevertheless, globalisation may be regarded as a process which has shaped and influenced many of the concepts which will be analysed as part of this thesis. Globalisation has enabled business targets to be realised more readily on a global scale (Crews, C. and Thierer, A. (2003) 272-285; Gattiker, U. (2001) 3-4).

Standardisation and Adaptation

Standardisation and adaptation are both theoretical frameworks used to describe processes of strategic change within corporate organisations. This section defines and explains these concepts and outlines the approach which will be taken throughout the thesis to the wider relationship between these two frameworks and business and marketing strategies. Quantitative data will be looked at, but the main thrust of the analysis will be the qualitative extrapolation of any data, since this approach is arguably more compatible with the ultimate aims of the thesis. These results will be extrapolated to give a more in-depth insight into the current debate about which approach to business strategy, adaptation or standardisation, is the most effective method of advancing and realising business and marketing objectives within the ultimate parameters of business success.

What is Standardisation?

Standardisation is the generic label given to extending and applying domestic standards relating to products to target markets, even where these are located abroad. As a theory, it postulates that products sell more successfully when they are marketed uniformly in a targeted manner. This concept facilitates the process of branding, since there is little variation between the strategies employed in marketing across different countries; the product targeted at each market is thus essentially uniform in terms of its tangible and intangible features (Bergersen and Zierfuss (2004) 39). Many commentators, including Bergersen and Zierfuss ((2004) 39), have highlighted the similarities between standardisation and globalisation, and have further argued that the process of market homogenisation facilitates market standardisation.

The process of globalisation has been regarded by some as a vehicle for the standardisation model of business strategy. It may be argued that globalisation, by its very nature, is denominated in similarities, and that the most effective way to take advantage of how globalisation has increased the marketability of products is therefore to maximise their recognisability by adopting a uniform approach to the marketing strategy which introduces the product to the market in the first place (Barrett, B. et al. (1998) 15, 147). However, this is an obvious rationale, and it does not offer a robust defence of the standardisation model. Essentially, the standardisation model presupposes that the strategy which introduces products into the global market is high risk, and that it is therefore advisable to keep costs down by preserving the uniform nature of the product itself.

This is the essence of how the standardisation model may be defended. However, this essential strength of standardisation is refuted by those who argue that to assess risk in these terms is counterproductive, as economising in this way may thwart the very availability of the product, which is the mechanism through which the initial cost of product research, development and marketing is recovered. The main advantages of the standardisation model may be regarded as the pooling of knowledge and the reduction of costs in promoting the product at market level. It has also been argued that standardisation indirectly increases product visibility and recognition, and that it increases product safety, thus stimulating customer loyalty (Bergersen and Zierfuss (2004) 39).

What is Adaptation?

Adaptation refers to the local and domestic context of business and marketing strategy (Barrett, B. et al. (1998) 15, 147; Edgell Becker, P. (1999) 180), and is the direct opposite of the process of standardisation. The process of adaptation refers to the marketing of products within given cultural contexts, where marketing strategy and other business processes tend to vary according to locality and domestic circumstances (Barrett, B. et al. (1998) 15, 147; Crews, C. and Thierer, A. (2003) 272-285).

The term adaptation is often taken to mean the same thing as customisation (Bergersen and Zierfuss (2004) 39), and the general approach within each concept advocates varying business processes to suit foreign marketing environments and conditions (Barrett, B. et al. (1998) 15, 147; Bergersen and Zierfuss (2004) 39). This variation, or process of adaptation, naturally involves an attunement to local cultural conditions as well as to environmental and political factors (Crews, C. and Thierer, A. (2003) 272-285; Barrett, B. et al. (1998) 15, 147).

Advocates of this approach to marketing strategy argue that the adaptation of business processes allows for a more targeted, more effective and more individual approach to the achievement of business objectives. This follows the argument that, for a marketing strategy to be effective, it must take into account the local context at which marketing activities are targeted; in doing so, firms are able to attune and bolster the performance of their products in competitive markets (Barrett, B. et al. (1998) 15, 147).

Criticisms

Commentators such as Onkvist and Shaw ((1990) quoted by Bergersen and Zierfuss (2004) 40) have argued that the standardisation model is a false economy. This postulation is made on the basis that, while standardisation may reduce expenditure in the first place, the additional costs which adaptation incurs are more than compensated for through increased revenue from responsive markets which have been targeted effectively. In making this argument, the claims of increased homogeneity among customers (a process linked with globalisation) advanced by advocates of standardisation are rejected (Bergersen and Zierfuss (2004) 40). The contrary suggestion is that customers are becoming more interested in and motivated by diversity, and that price is not the most influential factor shaping customer purchasing behaviour (Bergersen and Zierfuss (2004) 40).

Critics of the standardisation and adaptation models have argued that the dichotomy between the two is a false one. These critics (including Douglas and Wind (1987), quoted by Bergersen and Zierfuss (2004) 40) have argued that the most valuable way to take advantage of the marketplace in terms of strategy is to adopt a contingency approach which, very pragmatically, combines the strengths and weaknesses of each approach; the two approaches are in a sense merged within one spectrum, with circumstances dictating which to lean towards. Perhaps this argument has its own weaknesses. It is arguably something of an evasion to critique two schools of thought by saying that they are best merged. Pragmatism is very pertinent to the idea of marketing strategy; however, the idea of a strategy is to formulate a plan, and where one cannot see the theoretical parameters of a given plan, this makes it difficult to evaluate and critique. It also makes it difficult to replicate, or to attribute success to anything other than luck or circumstance.

This critique of standardisation and adaptation therefore has no parameters and, while it is difficult to criticise an argument which is so abstracted, it is perhaps fair to say that the critics of standardisation and adaptation who advocate a replacement model incorporating the two schools of thought have advanced a facile critique of the separateness of each school, one which has not credibly addressed or isolated the weaknesses within each model. It is not therefore particularly credible to argue that the separation between standardisation and adaptation is an artificial one. In any event, a theory should not be laid to rest simply because it has weaknesses.

It is acknowledged that some of these weaknesses might be ironed out by an amalgamation of the two approaches; however, this is too abstracted a critique of the two models to be truly credible. In any event, to advocate the effective replacement of two schools of thought with one which is simply an aggregate of the two under critique is a weak proposition. It amounts to mounting a critique by saying that one has a choice between the two schools of thought; this is an obvious proposition, and it fails to address the individual strengths and weaknesses of either model.

A more credible analysis is advanced by Jain (1999, quoted by Bergersen and Zierfuss (2004) 44), who has argued in favour of the standardisation model. His argument is that there is a measurable relationship between strategy, organisational features, the external environment and firm performance (Bergersen and Zierfuss (2004) 44). Jain (1999) advances this argument as a way of qualifying the recommendation of Douglas and Wind (1987) outlined above. Jain's argument is in turn qualified by Theodosiou and Leonidou ((1999) quoted by Bergersen and Zierfuss (2004) 45), who argue that the process of measurement is important; they go further than Jain by proposing which factors are best had regard to during the measurement process, including the sociocultural (Cerulo, K. (2001) 26), political-legal, and physical situation of foreign markets.
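Purely as an illustration of what a 'measurable relationship' of this kind could look like in practice, and not as a model proposed by Jain or by Theodosiou and Leonidou, the sketch below (in Python, with invented variable names and synthetic numbers) fits a simple linear model of firm performance against the sorts of strategic and environmental factors they mention.

import numpy as np

# Illustrative sketch only: a simple linear model of the kind that might be
# used to test the claim that strategy, organisation and environment bear a
# measurable relationship to firm performance. Variable names and data are
# hypothetical, not drawn from Jain (1999) or Theodosiou and Leonidou (1999).

# Each row: [degree of standardisation (0-1), sociocultural similarity (0-1),
#            political-legal similarity (0-1)]; synthetic example values.
X = np.array([
    [0.9, 0.8, 0.7],
    [0.3, 0.4, 0.5],
    [0.7, 0.9, 0.8],
    [0.2, 0.3, 0.6],
    [0.8, 0.6, 0.9],
])
# Hypothetical performance measure (e.g. subsidiary sales growth).
y = np.array([0.12, 0.04, 0.10, 0.03, 0.11])

# Add an intercept column and fit ordinary least squares.
X_design = np.column_stack([np.ones(len(X)), X])
coefficients, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print("intercept and coefficients:", np.round(coefficients, 3))
# In this toy setting, positive coefficients would be read as support for a
# relationship between strategic/environmental fit and performance.

The point of such a sketch is only that the debate between standardisation and adaptation becomes empirically tractable once the contextual factors are measured, which is precisely the qualification Theodosiou and Leonidou add to Jain's position.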

Branding, its importance and the challenges of the global market

The idea of branding has become critically important in the contemporary world of marketing strategy. This section defines a brand in abstract terms, followed by a discussion of how exactly the concept of branding relates to contemporary society and to contemporary business and marketing processes within that society. The idea of the brand as it operates in relation to the challenges of the global market will also be considered in this section. A brand is an image, or an amalgam of images and ideas, which communicates business rationales about products and services to customers and prospective customers (Barrett, B. et al. (1998) 15, 147). Typical examples of branding are the McDonald's logo, Nike and the mobile telephone company Orange.

Business is a fast-paced and highly competitive environment (Barrett, B. et al. (1998) 15, 147), and in many ways the idea of branding has grown up around these characteristics of contemporary business practice. It is also the case that businesses almost universally need to build and maintain a solid customer base, which means that customer loyalty is routinely encouraged as a method of facilitating better business performance. These characteristics of marketing and business assist with a business's entry into international markets, as branding augments the recognisability of products and services (Barrett, B. et al. (1998) 15, 147), thus making the quality of the product or service synonymous with the brand and/or the company.

Branding has also been referred to as corporate advertising. Indeed, this is how the company Aviva has attempted to project its image on the global marketing stage. Here is how the Aviva brand is introduced on its corporate website: 'The Aviva brand is about life and vitality – helping our 35 million customers worldwide to make the most of their lives. Aviva became the new name for the former CGNU in July 2002. The change represented part of the group's planned journey towards being recognised as a world-class financial services provider.

'The Aviva brand brought together more than 40 different trading names around the world and created further opportunities for the group to harness the benefits of its size and international capabilities. Today the Aviva brand is alive and trading in more than 20 countries, with Aviva now the world's fifth-largest insurance group (based on gross worldwide premiums for the year ended 31 December 2005). Group chief executive Richard Harvey said: "We are creating a new and powerful international financial services brand. The benefits of this change are significant. We are able to make more of our corporate brand, to the benefit of our trading businesses. We are also making more effective use of our marketing spend, particularly in advertising and sponsorship."'

In 2005 and 2006 the company ran two successful corporate advertising campaigns, publicising two slogans: 'playing fair with your future' and 'forward thinking'. These advertising campaigns were targeted through the adaptationist model of business and marketing strategy.

The first campaign, 'forward thinking', was aimed at opinion formers, investors and the financial community. In response to poor product performance in the UK (discussed in more detail below), Aviva needed to bolster investment from within the UK. The advertising campaign was therefore aimed at attracting the attention of readers of the Economist and the Financial Times, with themed articles running in each of these publications. This was combined with billboard campaigns in London, at the Eurostar tunnel and in various underground stations.

This may be contrasted with the 'playing fair with your future' campaign, which was targeted to address concerns in Italy and Spain, where industry instability and poor customer morale had been identified. This had essentially arisen from a takeover of branches of Commercial Union in these countries. The Aviva campaign was aimed primarily at grassroots customers, hence the more customer-focused mode of address ('playing fair with your future'). Aviva Spain won an industry award for its brand campaign from the magazine Actualidad Economica, and this was seen as highly significant in terms of the overall success of a targeted campaign which aimed to adapt to commercial and economic conditions within local environments, following research into these factors.

Branding must therefore be thought about in the context of the challenges involved in the global marketing environment (Shaw Sailer, S. (1997) 1-15; Sibley, D. (1995) Chapters One and Two; Smith, H. (1949) Chapters One and Two; Smith, M. and Kollock, P. (1998) Chapter One; Staley, E. (1939) 1-20).

Studying Restorative Justice – Dissertation Sample

Have your assumptions about the place of punishment in society been affected by studying restorative justice?

Zwelethemba epitomises restorative justice in South Africa; it is popular amongst the community and is the essence of maintaining peace in the community (Roche, 2004: 85).  Money received from this programme is contributed back into the community to reduce poverty and unemployment and to attempt to remove the need for 'draconian repressive measures' (Roche, 2004: 231).  This programme has clearly achieved the acceptance Professor Acorn considers to be implausible, improbable and impossibly demanding (Acorn, 2004).  Blakemore suggests that social policy should evaluate how those policies impact on people's lives (Blakemore, 1998: 5).

Where law has been defined as binding obligations on the one hand and a duty on the other, relating to the expectations of that society (Kidder, 1983: 21, citing Malinowski), a method of redress needs to limit any effects of deviation.  A framework of behaviour needs to be adhered to, described as primary when survival depends upon it (Tebbitt, 2000: 42, citing Hart).  This complicated arrangement between law and morality enforces acceptable behaviour, which is maintained through rules and principles: "the cement of society" (Elliott and Quinn, 1998: 449, citing Devlin).

This ‘cement’ illustrates legal moralism that has been identified as ‘socially significant’ (Cotterrell, 1989: 1), providing an analysis of law’s conceptual structures (1989: 3).  Emerging from these theories are different schools of thought, some suggesting that laws are the result of morals, others refuting this.  Some considered law was allied closely to the ethos of society whilst others sought a link between law and politics.  All of these conflicting views, however, impacted on the relevance and concepts associated with restorative justice.

This essay briefly traces the various theories of jurisprudence associated with the development of law in association with social policy and discusses advances in the Criminal Justice System.  It then examines the contention vis-à-vis the role of punishment in society, and attempts to reconcile these various aspects whilst focussing on the efficacy of restorative justice and, most especially, personal assumptions about the place of punishment in society following personal investigation into the role of restorative justice.

DISCUSSION

A fidelity to maintaining society’s principles prevented ostracism within simple societies, described by Durkheim as ‘mechanical solidarity’ (Pampel, 2000: 57).  This supported the economic, political and social ethos interacting within those communities where an interdependence on duty overcame the need for self-interest (Pampel, 2000: 57).  This concept of social realism emphasised the importance of shared values, ultimately influencing individuals’ behaviour (Pampel, 2000: 57).  A high level of social cohesion existed in simple societies as a result, with morality reflecting social realism realised through the use of law to safeguard the essence of its continued success (Lanser and Vanstone, 1998: 83).  With the growing complexity of more developed societies more rules were needed to maintain acceptable behaviour (Elliott and Quinn, 1998: 434).

A decline in shared values, as the result of organic solidarity differentiating the collective conscience, creates an environment for an increase in crime.  Durkheim observed that greater differentiation impacted directly on the types of punishment issued to offenders (Wacks, 1987: 164).  Controversially, Durkheim considered that a particular amount of crime increased solidarity amongst communities as they banded together in an attempt to 'stamp it out' (Pampel, 2000: 67), and that "crime is normal and valuable in a healthy society" (Cotterrell, 1992: 159), with shared morality reaffirming ideas of right and wrong and thereby increasing the importance of law, since violations of accepted rules instil meaning into those rules (Pampel, 2000: 67).

This increased solidarity creates emotional bonds within social groups, the outcome of which leads to an identification of crime as deviance, with a recognition of diversity between types of deviance in different societies (Pampel, 2000: 67).  This philosophy of inter-related support has been recognised as structural functionalism which, taken to extremes, acknowledges that poverty and crime are normal and natural functions within any healthy society (Pampel, 2000: 75).  Professor Acorn elaborates on a similar concept with her ‘Three Pillars of Restorative Optimism’ which relates to a reciprocity between justice and punishment being replaced by that of justice and mutual respect.  This is followed by the tenets of faith altering the offender’s concept of society’s acceptance, to which Professor Acorn adds the application of faith through victim-offender encounters initiating healing in the victim (Acorn, 2004).

Weber also recognised the importance of power and authority in society, especially in relation to their effect on social policy and the state as a whole, although his views followed a different path from Durkheim's (Levin, 1997: 24).  Law could be rationalised into 'formal systems' and 'substantive systems', dependent upon their degree of self-sufficiency.  Those laws, once applied, could be distinguished as 'rational' or 'irrational' (Wacks, 1987: 169).  Authority and the development of power within society were largely a result of economic advantage (Pampel, 2000: 106), whilst "much policy making (in respect of political parties) … is about maintenance of the status quo and resisting challenges to existing values" (Bachrach et al, 1970).

Antonio Gramsci (1971) takes this a step further by commenting: "the life of the State is conceived of as a continuous process of formation and superseding of unstable equilibrium" (Clarke & Newman, 1997: xiv).  Weber "defined formal rationality as the use of calculation to weigh cost and benefits and the search for maximum efficiency to guide conduct (subject to existing laws, rules and regulations)" (Haynes, 1980: 9).  In relation to the size of the community or state, Weber considered that "only formally rational rules that regulated conduct could deal with the massive size of the modern nation-state" (Pampel, 2000: 109).

Using the background of the growing bureaucracies, Weber studied the effects of power.  John Stuart Mill, the earlier English philosopher, wrote in his essay 'On Liberty': "the only purpose for which power can be rightfully exercised over any member of a civilised community against his will is to prevent harm to others" (Lanser & Vanstone, 1998: 83).  Would the views of J S Mill have been consistent with this use of power in relation to bureaucratic societies?  As part of his study Weber looked at legal authority and suggested that when it involves "laws, regulations, and rules that specify appropriate actions, it guides bureaucracies in modern societies" (Turner, 2000: 140).  He talks about 'charismatic authority', which he suggests is "based on social relationships" (Turner, 2000: 140).  Weber suggests that "charismatic authority disrupts the accepted social order and challenges traditional and legal authority" (Pampel, 2000: 114).

The Royal Commission on Criminal Justice was set up to "examine the effectiveness of the criminal justice system in England and Wales in securing the conviction of those guilty of criminal offences and the acquittal of those who are innocent" (Zander, in Martin, 1998).  The Runciman Commission made 352 recommendations in 1993, ranging from police investigations to the disclosure of evidence (Field and Thomas, 1994, in James and Raine, 1998: 40).  All aspects of the criminal justice system came under scrutiny, with 600 organisations contributing evidence (Martin, 1998: 115).  During this period the Criminal Justice and Public Order Act 1994, the Criminal Appeal Act 1995 and the Criminal Procedure and Investigations Act 1996 were all implemented, with varying interpretations and capricious emphases which altered according to management changes.

Pampel observes, however, that "the problems of society become most visible when change occurs, and recent decades have brought immense social and economic changes" (Pampel, 2000: 52), whilst Durkheim noted that society works best when it exercises control over individuals (Pampel, 2000: 72), with Weber maintaining that "societies work more smoothly when the use of power has legitimacy in the eyes of both the rulers and the ruled" (Pampel, 2000: 113), as can be seen from this example:

Section 5 [Public Order Act, 1986] was initiated in respect of a football supporter using foul language in the presence of a woman and two small children: 'Broadbury Road, Case 319: the incident occurred in the family stand at a football stadium. The suspect was observed using obscene language when a woman with small children was sitting directly in front of him. He was warned but continued to be abusive, claiming when arrested that "I ain't fucking done nothing"' (HMSO, 1986).

This illustrates the subjectivity of law's application to society.  Expletives, as an integral expression of language, have reflected the rapid social changes of the last century.  This 'offence' is a subjective interpretation which, although offensive, should be interpreted in correlation with the situation: in this case acknowledged as an offence due to the presence of the two small children.  To paraphrase Rawls, who recognised "this public concept of justice as others accepting the same principles of justice and the basic social institutions satisfying those same principles" (Rawls, 1986), the purpose of law is to dispense justice and, in a perfect world, laws being just would result in outcomes realised in justice.  Laws are intended to regulate relationships so that conflict is avoided, enabling government and education to progress.
 
Deterrence, retribution, rehabilitation and incapacitation constitute the four major theories of punishment.  Deterrence aims to reduce crime through the threat of punishment, or through its example.  The concept is that the experience of punishment would create an impact unpleasant enough to prevent any further offence.  Penalties are established to prevent crime being contemplated, with the idea that the example of unpleasant consequences would make potential criminals reconsider any future offence.  Retribution requires an offender to contribute to community-based endeavours in proportion to the crimes committed.

The concept involves cleaning the slate through enforced labour, accounting to society for any misdemeanour.  With the intention of achieving better justice through more consistent sentencing, the White Paper preceding the Criminal Justice Act 1991 suggested "that convicted criminals get their just deserts".  This concept does actually limit the State's power by limiting exemplary sentences, achieving parity when two offenders receive similar punishments for similar crimes.  The National Victim Support Programme was considered a way forward with respect to society's acceptance of restorative justice, but "both of the major political parties have pursued half formed and in many ways half hearted policies in relation to victims of crime. There is little indication of change in this area" (Newburn and Crawford, 2002: 117).

Conformity through inner positive motivation exemplifies the theory of rehabilitation, although it has been criticised for disparity in proportionality.  The concept is not based on the degree of offence committed or focused on the criminal’s past, but on future rehabilitation to preclude re-offending through changes of circumstances.  Conversely, incapacitation recognises that some offenders fail to respond to deterrence or rehabilitation and continue to commit crimes as and when an opportunity to do so presents itself.  For criminals with this mindset the only option is protective sentencing to prevent further crimes being committed, thereby punishing the offender for crimes committed with a further implication of punishment for future crimes that could be envisaged if released.

An equally important part of restorative justice must be measures to prevent crimes being committed.  Funding of £6 million has been invested in a Government programme to reduce crime.  Some of these measures include restorative justice, enforcement of financial penalties, CCTV initiatives, treatment of offenders, youth inclusion initiatives, targeting policies and intervention work in schools.  To be effective in developing suitable policies, the criminal justice system needs to approach the problem from different angles simultaneously, and to adopt a policy of co-operation and co-ordination across all involved parties.  Since the inception of the Regional Crime Squads (South, cited in Maguire, 1994: 423), co-operation has existed across autonomous police forces, and surveillance and intelligence squads can acquire information which, along with co-operation from the other agencies which make up the criminal justice system, can be collated and used to prevent some of the worst excesses of violence and crime erupting.

Nozick argues that the basis of the State is 'a need for a single and efficient protective association in a territory' (McCoubrey & White, 307), with Jaques considering that 'economic efficiency needs to be assessed in respect of its impact on human feelings, on community and on social relationships and the quality of life in society' (Jaques, 1976: 15).  Adjudication provides a formal mechanism for resolving disputes, with rules of change available to deal with new problems requiring further elucidation, and rules of recognition involving prerogative powers and the sovereignty of Parliament.  These rules do not account for those natural rules which acknowledge inherent fundamental human rights.  "Justice is traditionally thought of as maintaining or restoring a balance or proportion and its leading precept is often formulated as 'treat like cases alike and treat different cases differently'" (Hart, 1998).

Finnis observes that "The prohibitions of the criminal law have a simple justifying objective: that certain forms of conduct including certain omissions shall occur less frequently than they otherwise would. The 'goal' of the familiar modern system of criminal law can only be described as a certain form or quality of communal life, in which the demands of the common good indeed are ambiguously and insistently preferred to selfish indifference or individualistic demands" (Finnis, 2002).  This acknowledges that each individual is aware that deviation from society's code of behaviour would result in sanctions being applied to avoid injustice.

The ethos Finnis applies to his explanation of retribution is considered to rectify the distribution of advantages and disadvantages by depriving the convicted criminal of his freedom of choice in proportion to his unlawful act.  Regardless of theories, crime continues to be committed on an escalating scale, with 5.2 million offences recorded in England and Wales during 2000 (Recorded Crime, HMSO Press Release, 19/01/01) which, when compared to 3.87 million in 1989 and 479,400 in 1950, has an effect on long-term projections of the prison population to 2008 (HMSO Press Release, 23/05/01).  Evidence of this was exhibited when the disturbances at Strangeways prison took place in 1990, prompting the Woolf Report (Custody, Care and Justice, HMSO, 1991).  It was published as a White Paper in 1991 and highlighted the relationship between overcrowding in prisons and the maintenance of control, promoting ongoing discussions about the aims of imprisonment.

Meanwhile, the crime response and solving rate has fallen from 45% to 29%, despite the number of police officers having increased from 63,100 to 126,500 (figures taken from Cautions, Court Proceedings & Sentencing, Home Office press release, November 2001).  Maguire suggests that "increasing numbers of police officers, an increase in telephones making reporting easier, increasing use of insurance, and reduced levels of public tolerance to violence have all contributed" (Maguire, cited in Croall, 1997).  Stern recognises that the system often precludes dedicated people from a more effective route of exacting retribution (Stern, 1989: 247).
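To make the scale of the changes quoted above easier to read, the short calculation below works only with the figures as they are cited in this essay; they are reproduced here for arithmetic illustration and are not independently verified.

# Worked arithmetic using only the figures quoted in the essay above; the
# figures themselves are as reported there, not independently verified.

offences_1950 = 479_400        # recorded offences, England and Wales, 1950 (as quoted)
offences_1989 = 3_870_000      # 1989 (as quoted)
offences_2000 = 5_200_000      # 2000 (as quoted)

officers_before = 63_100       # police officer numbers (as quoted)
officers_after = 126_500

clear_up_before = 0.45         # crime response/solving rate (as quoted)
clear_up_after = 0.29

print(f"Offences 1950 -> 2000: x{offences_2000 / offences_1950:.1f}")
print(f"Offences 1989 -> 2000: +{(offences_2000 / offences_1989 - 1):.0%}")
print(f"Police officers: +{(officers_after / officers_before - 1):.0%}")
print(f"Clear-up rate: {clear_up_before:.0%} -> {clear_up_after:.0%} "
      f"({(clear_up_after - clear_up_before) * 100:+.0f} percentage points)")

On these cited figures, recorded crime grew roughly tenfold between 1950 and 2000 and by about a third between 1989 and 2000, while police numbers roughly doubled and the clear-up rate fell by sixteen percentage points, which is the contrast Maguire's explanation seeks to account for.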

The diversity of ideas and practices associated with the restorative justice movement exemplifies the difficulties associated with the concept.  Johnstone (A Restorative Justice Reader: Texts, Sources, Contexts) highlights the paradigm of justice associated with practical experimentation that underlies the values and ideas involved, covering a number of models of theoretical law spanning criminal and civil law together with restorative justice.  The relevance of the earlier part of this essay reflects the ethos of restorative justice: this is not a new concept, nor can it be viewed in isolation.  The philosophy of Aquinas is as relevant as Mill, Rawls as relevant as Durkheim or Weber.

CONCLUSION

Restorative justice is identified through mediation, conferencing, circles and reparative boards, such as the use of victim/offender mediation with the intention of reconciliation as opposed to mere conciliation.  Three areas to be concentrated on in this respect are how restorative justice can fit into the existing criminal justice system, the identity of a modern definition of community, and the ethos of forgiveness.  Whilst this concept has relevance in today's society, human rights issues and society's conceptions of punishment's role create a rhetoric which still needs to be resolved.

Restorative justice is viewed with suspicion due to concern amongst the community in relation to the appropriateness of restorative justice for cases of violence, and the appropriate punishment in such cases.  Added to this, managerialism and financial control have impacted on the restorative justice movement.  Where there is no precedent, the focus of control is balanced between local and central government, with penal reform likely to be forced into the background: with "the front bench Home Affairs spokesmen of both the major parties battling to 'out-tough' each other, there appears little prospect of coherent and forward-thinking policy-making" (Newburn and Crawford, 2002: 178).

Individual and collective morality would assume that offenders should be punished to maintain the stability of the community and maintain their safety.  Our collective conscience ensures that the majority accept the rule of law and accept that deviance needs to be punished.  Many organisations have highlighted the growth in recorded crime despite these measures in place to punish the offender.  Punishment falls into various areas from incapacitation to retribution, deterrence to rehabilitation.  Professor Acorn is not a supporter of restorative justice.  She considers its ethos implausible in respect of social relations and psychology, resulting in improbable and impossibly demanding results revealed in empty words.  Acorn suggests that vengeful impulses demand retribution and likens her descriptive ‘first pillar’ to “Pollyanna-ish wishful thinking” (Acorn, 2004). 

Psychologically, restorative justice is assumed to invoke an aesthetic sentiment of forgiveness for miscreants and release for victims.  What it fails to do is provide society with assurances that its safety and integrity will be maintained, in an atmosphere where the offender's rights appear to be upheld at variance with those of the victim, or with the fundamental rights the victim is entitled to expect.  A personal view could be recorded which considers that restorative justice exhibits illusory tendencies to pacify reformers at the expense of society's status quo.  Clearly not a supporter of restorative justice, this writer intuitively distorts the semantics and cognitively refers to this concept as retributive justice: more aptly named, and far more appropriate for the majority of offenders who, regardless of intervention programmes to rehabilitate them, will continue to offend despite society's best efforts.

BIBLIOGRAPHY

  • Acorn, Annalise (2004):  Compulsory Compassion:  A Critique of Restorative Justice.  Vancouver:  University of British Columbia Press
  • Bachrach, Peter and Baratz, Morton S (1970):  Power & Poverty:  Theory and Practice:  Oxford, UK:  Oxford University Press
  • Blakemore, Ken (1998):  Social Policy:  an Introduction:  Buckingham, UK:  Open University Press
  • Clarke, J & Newman, J (1997):  The Managerial State:  London, UK:  Sage
  • Cotterrell, Roger (1989):  The Politics of Jurisprudence:  a Critical Introduction to Legal Philosophy:  London, UK:  Butterworths
  • Cotterrell, Roger (1992):  The Sociology of Law:  London, UK:  Butterworth
  • Elliott, C & Quinn, F (1998):  English Legal System (2nd ed):  Essex, UK:  Addison Wesley Longman Ltd
  • Finnis, (2002):  Natural Law:  the Classical Tradition.  In Jules L Coleman and Scott Shapiro (eds).  The Oxford Handbook of Jurisprudence and Philosophy of Law.  Oxford:  Oxford University Press.  Pages 1 – 60.
  • Gregson, S & Livesey, F (1993):  Organisations & Management:  Butterworth-Heinemann Ltd, Oxford
  • Hart in Elliott, C & Quinn, F (1998):  English Legal System (2nd ed):  Essex, UK:  Addison Wesley Longman Ltd
  • Haynes, Robert J (1980):  Organisation Theory and Local Government:  London, UK:  George Allen & Unwin Ltd
  • James, A & Raine, J (1998):  The New Politics of Criminal Justice:  Longman, London
  • Jaques, E (1976):  A General Theory of Bureaucracy:  Heinemann, London
  • Johnstone, Gerry (2003):  A Restorative Justice Reader:  Texts, Sources and Context.  Devon:  Willan Publishing.  ISBN: 1 903 240-81-6
  • Kidder, RL (1983):  Connecting Law and Society: an Introduction to Research and Theory.  Englewood Cliffs, USA:  Prentice-Hall
  • Lanser, M & Vanstone, B (1998):  A Level Law – Letts Study Guide:  London, UK:  Letts Educational
  • Levin, Peter (1997):  Making Social Policy:  the Mechanisms of Government and Politics and how to investigate them:  Buckingham, UK:  Open University Press
  • Maguire, M (1997), cited in Croall, Hazel (1997):  Crime and Society in Britain.  Oxford:  Oxford University Press
  • Martin, J (1998):  The English Legal System:  Hodder & Stoughton, Oxford
  • McCoubrey, H & White, N (3rd ed) (1999):  Textbook on Jurisprudence:  London, UK:  Blackstone
  • Pampel, Fred (2000):  Sociological Lives and Ideas:  Basingstoke, UK:  Macmillan
  • Rawls, John (1986):  Distributive Justice.  In Robert M. Stewart (ed), Readings in Social and Political Philosophy, pp. 196-211.  New York & Oxford:  Oxford University Press
  • Roche, Declan (2004):  Accountability in Restorative Justice.  Oxford:  Oxford University Press
  • Stern, V (1989):  Bricks of Shame: Britain's Prisons, (2nd ed) Harmondsworth: Penguin
  • Tebbitt, Mark (2000):  Philosophy of Law:  an Introduction:  London, UK:  Routledge
  • Turner, Stephen (ed) (2000):  The Cambridge Companion to Weber:  Cambridge, UK:  Butterworth
  • Wacks, Raymond (1987):  Jurisprudence:  London, UK:  Blackstone
  • Zander in Martin, J (1998):  The English Legal System:  Hodder & Stoughton, Oxford
  • Home Office (1991): Custody, Care and Justice: The Way Ahead for the Prison Service in England and Wales. Cm 1647. London: HMSO
  • Home Office Research Study No. 135. Policing Low-level disorder: Police use of Section 5 of the Public Order Act 1986.  London:  HMSO
  • Woolf, H (1991): Prison Disturbances April 1990 Report of an Enquiry. Rt. Hon. Lord Justice Woolf (Parts I and II) and His Honour Judge Steven Tumin (Part II) Cm 1456 (London: HMSO).
  • Newburn, Tim and Crawford, A (2002):  Recent Developments in Restorative Justice for Young People in England and Wales:  Community Participation and Restoration.  British Journal of Criminology 45, no. 2 (2002), pp. 476-495
  • Melville, R and Todd, C (2000):  UK Drugs Legislation (Online)
  • Recorded Crime, HMSO Press Release, 19/01/01

Economic Activity – Dissertation Sample

Movement of persons within the EU is very much still dependent on economic activity

The European Union largely grew out of the European Economic Community (hereafter the “EEC”), which was established in 1957. The EEC was, as its title suggests, an exclusively economic association of Member States and as such the rights and liabilities that it settled had an exclusively economic identity or origin. It can come as no surprise therefore that the original right of free movement was bestowed only on economically active citizens, namely EEC workers.

However, since 1957, and in particular since the Maastricht Treaty (the Treaty on European Union) in 1992, several Treaties have intervened to amend EC law with the aim of further integrating the Member States and deepening the concept of Union between them. Quite early in this process a meaningful and valuable concept of EU citizenship was identified as an important goal and an essential ingredient in the continuing integration of the Member States. The Treaty of Amsterdam (1997) declared as follows:

 "Citizenship of the Union is hereby established. Every person holding the  nationality of a Member State shall be a citizen of the Union. Citizenship of  the Union shall complement and not replace national citizenship.'

As a consequence, Article 17 EC now sets down the political declaration that every person holding the nationality of a Member State shall be a citizen of the Union. Some of the rights of the EU citizen are conterminous with those bestowed on economically active nationals of Member States.  Article 18 EC provides that:

 “Every citizen of the Union shall have the right to move and reside freely within the territory of the Member States, subject to the limitations and conditions laid down in this Treaty and by the measures adopted to give it effect.”

Moreover, the Treaty of Amsterdam incorporated the Schengen  provisions into the framework of the European Union, thus eliminating checks at internal borders between those Member States that have signed the Schengen Agreement (this currently does not include the United Kingdom).

Casual travel around the European Union

All citizens of a Member State of the European Union are entitled to enter any other EU country without the need to comply with special formalities. All that is needed is an identity card or valid passport. The right to travel does not depend on the individual circumstances of the citizen. Whether an EU citizen intends to travel for private or professional reasons, whether he or she is working in an employed or self-employed capacity, or whether a citizen is merely a tourist, the right to travel anywhere in the European Union is now enshrined in EU law.

A citizen’s right to travel around the European Union may be restricted only on grounds of public policy, public security or public health. A citizen’s family members are entitled to travel with the applicant even if they are not themselves nationals of a Member State of the European Union. It is unnecessary to apply for a residence permit if the citizen’s stay in a Member State other than his or her own does not exceed three months. The only obligation which may legally be imposed in certain states is the requirement for a travelling citizen to notify the authorities of his or her presence. Typically this obligation is fulfilled automatically when a citizen checks into a hotel or when a landlord completes a declaration regarding a new tenancy.

Article 39 EC

Freedom of movement became a core component of the Single Market and is undeniably still today one of the most important aspects of EC law. The Treaty of Rome established the right of free movement around the European Economic Community in what is now Article 39 EC.  Article 39  sets down the following provision:

 “1. Freedom of movement for workers shall be secured within the Community.

 2. Such freedom of movement shall entail the abolition of any discrimination based on nationality between workers of the Member States as regards employment, remuneration and other conditions of work and employment.

 3. It shall entail the right, subject to limitations justified on grounds of public policy, public security or public health:

 (a) to accept offers of employment actually made;

 (b) to move freely within the territory of Member States for this purpose;

 (c) to stay in a Member State for the purpose of employment in accordance with the provisions governing the employment of nationals of that State laid down by law, regulation or administrative action;

 (d) to remain in the territory of a Member State after having been employed in that State, subject to conditions which shall be embodied in implementing regulations to be drawn up by the Commission.

 4. The provisions of this Article shall not apply to employment in the public service.”

A fundamental economic freedom, Article 39 includes in particular: the right to obtain employment in another Member State; the right to move to that Member State; the right to settle in the other Member State for the purpose of employment; the right to remain in the Member State for the purpose of that employment; and the right not to be discriminated against while carrying out that employment.

The Treaty does not elaborate on the meaning of ‘worker’, presumably to leave the European Court of Justice a free hand to apply its contextual and purposive brand of interpretation so as to afford the concept the widest possible definition in case law. True to form, the European Court has produced a generous interpretation of who will qualify as a worker in the cases that have come before it. In Lawrie-Blum v Land Baden-Württemberg, the essential characteristic of a worker was found to be:

 “the performance of services for and under the direction of another in return for remuneration during a certain period of time.”

Due to concerns that disparate national definitions would obfuscate and confuse the application of the freedom of movement, the Court has ensured that the concept of worker remains the exclusive province of EC law. Levin v Staatsecretaris van Justitie underlined the fact that the definition of worker is not to be determined by the national laws of the member states, which inevitably would vary from state to state, causing anomalies, creating uneven labour rights across Europe and defeating the overarching scheme of the Single Market.

Also in Levin v Staatsecretaris van Justitie the European Court ruled that the term worker embraces part-time workers as long as the employment concerned is genuine work of an economic nature and not merely nominal. Moreover, the cases of Kempf v Staatsecretaris van Justitie and Steymann v Staatsecretaris van Justitie underscore the Court of Justice’s purposive and inclusive attitude to this criterion, buttressing the observation that relatively minimal and limited economic activity will nonetheless justify recognition of the rights provided by Article 39.

In any discussion of the rights of free movement it is important to note the derogation provisions set down in Article 39(3) and 39(4). These provisions may justify exclusion of the free movement right on grounds of public policy, public security or public health, and both free movement and non-discrimination at work rights can be denied where the public service exemption applies. In brief, these provisions are narrowly construed so as to give the fullest possible scope and effect to the free movement rights deemed so important to the integrity of the Single Market and the broader plan of EU integration.

One illustrative case in point on the public policy derogation is Rutili v Ministre de L’Interieur,  which concerned the rights of an Italian political activist. The European Court of Justice held that restrictions on the movement of an EC national under Article 39(3) cannot be justified unless the behaviour of the individual constitutes a genuine and sufficiently serious threat to public policy in the sense of the interests of democratic society.

Citizenship and Free Movement: Recent Case Law of the European Court

As stated above the orthodox approach to the EC law on free movement has resulted in the attachment of rights flowing from the EC Treaty and EC legislation to an economically active “worker”, and the equivalent rights of others, such as family members, have typically been derived from the worker.

In a series of relatively recent decisions, the European Court of Justice has challenged this traditional approach to the allocation and protection of EC free movement rights. The seminal case of Rudy Grzelczyk v Centre Public d'Aide Sociale d'Ottignes-Louvain-la-Neuve concerned a student who was refused a Belgian social security benefit purely on the ground of his French nationality. Despite the fact that this action by the Member State was technically lawful under the relevant EC legislation, the European Court ruled that the claimant’s status as a citizen of the EU entitled him to be treated in precisely the same way as Belgian citizens in this context.

The Court famously declared that citizenship of the European Union:

 “is destined to be the fundamental status of nationals of the member states, enabling those who find themselves in the same situation to enjoy the same treatment in law irrespective of their nationality, subject to such exceptions as are expressly provided for”.

Thus, in this ground-breaking decision the European Court confirmed the existence of a broad new avenue for the bestowal of EU rights. Although Grzelczyk was neither a worker in the sense of Article 39 nor a person whose status entitled him to the benefit under Belgian law, EC law was found in this purposive interpretation to require the claimant to be treated in the same way as Belgian citizens on the strength of his European Union citizenship.

Other recent cases have elaborated on the potential scope of European Union citizenship rights. The central issue in Collins v. Secretary of State for Work & Pensions was whether the stipulation of United Kingdom law that entitlement to a certain social security benefit was subject to proof of “habitual residence” in the UK was compatible with EC law. The applicant was an Irish citizen who had resided in the UK for a short time in the 1980s but had spent most of the intervening period in the United States of America.

On his return to the United Kingdom, the applicant was denied Jobseekers’ Allowance on the grounds that he was not habitually resident in the UK. The Court of Justice found that as a citizen of the European Union, the applicant enjoyed a prima facie right not to be discriminated against on the grounds of his residence and benefited from a right to be treated on an equal footing with other ordinary citizens of the UK. The Court required the United Kingdom to demonstrate that the test of habitual residence was objectively justified in so far as it constituted a potential discrimination against citizens of other EU member states.

The Court of Justice developed this line of authority in Chen v. Secretary of State for the Home Department. Here the claimant was a Chinese national who entered the United Kingdom when she was six months pregnant, then travelled to Ireland and gave birth to her daughter. Under the normal operation of the nationality rules of the Republic of Ireland, the baby became an Irish citizen and as a consequence also a citizen of the European Union. The claimant and her daughter thereafter sought permission to remain in the United Kingdom. The European Court of Justice concluded that Article 18 EC provided that the baby, as an EU citizen, possessed the right to long term residence in the UK, and that her mother also enjoyed that right as her primary carer – at least where neither would prove to be a burden on the public finances of the United Kingdom.

The cases of Grzelczyk, Collins and Chen demonstrate that European Union citizenship rights are slowly being developed by the Court of Justice so as to establish a European area in which nationals of Member States are entitled to move freely across borders and reside in other member states. They also indicate that when such a right is exercised the citizen in question is entitled to be treated equally with the host state’s own nationals.

This jurisprudence constitutes a significant and highly purposive development of the EC Treaty’s rules on free movement of workers and it is submitted that the applicable case law and legislation on Article 39 had by this point in time begun to be subsumed or supplanted by the general implications of EU citizenship rights.

In the D’Hoop case, which was decided in 2002, Ms D’Hoop, a Belgian citizen, finished her secondary education in France, where she obtained a diploma recognised by the Belgian State as equivalent to the Belgian upper secondary education certificate which permitted students access to higher education. After studying at university in Belgium, she made an application for a tideover allowance. She was however refused that allowance on the ground that she did not comply with domestic requirements.

In its judgment, the Court of Justice adhered to the Opinion of Advocate General Geelhoed in analysing the position from two different perspectives: first, traditional free movement of worker provisions were considered; and, secondly, the concept of citizenship of the Union was applied to the facts.

The European Court ruled that Ms D’Hoop was neither entitled to rely on the rights conferred by the Treaty upon migrant workers nor the derived rights bestowed upon the members of the families of such workers. In point of fact the Court found that the application of Community law to freedom of movement for workers in the context of national rules regarding unemployment insurance requires that a person seeking to rely upon the freedom ‘must have already participated in the employment market’, except in circumstances where young people are seeking their first employment.

Regarding the citizenship aspect of the case, the European Court of Justice ruled that, because an EU citizen is entitled to enjoy in all Member States the same treatment in law as that accorded to the nationals of those Member States who find themselves in the same circumstances, it would not be compatible with the right of freedom of movement for a citizen to receive, in the Member State of which he or she is a national, treatment less favourable than he or she would enjoy had the opportunities offered by the Treaty in connection with freedom of movement not been taken. Moreover, the Court stated that this consideration is particularly important in the field of education.

The Court of Justice noted that Belgian legislation had created a disparity in treatment between Belgian nationals who had done all their secondary education in Belgium and those who, having taken advantage of their freedom to move, had obtained their diploma of completion of secondary education in a different Member State. Accordingly the Court found that such inequality of treatment was contrary to the principles which underpin the status of citizen of the Union.

However, the Court conceded that the condition at issue could be justified, provided that it was based on material and objective considerations independent of the nationality of the persons concerned and that it was proportionate to the legitimate aim of the national provisions.

Therefore, after acknowledging that the disputed tideover allowance was intended to facilitate the transition from education to the employment market for the benefit of young people, the Court conceded that it was legitimate for the national legislature to insist on a tangible connection between the applicant for that allowance and the geographic employment market concerned. However, notwithstanding this concession, the European Court of Justice found that a single condition concerning the place where the qualification had been obtained was too exclusive and general in nature to be defensible, that it improperly favoured a characteristic that was not necessarily representative of the true quality and level of connection between the applicant and the relevant employment market, and that it was disproportionate to the end to be achieved, in that it went beyond what was necessary to attain the stated objective.

Concluding Comments

It is the opinion of this commentator that the concept of EU citizenship is no longer in its infancy. To address the quote featured in the title to this paper, for many years citizenship of the European Union did indeed offer little more than a hollow glimpse of hope. However, from the perspective of 2006 it is submitted that EU citizenship has now grown in status and legal efficacy to constitute an essential and fundamental component of the EU legal order. The boundaries of the European Community have long since been rolled back from their original, exclusively economic horizons, and the scope of European Union reach and activity seems to extend with every passing year. The European Court has unleashed the full force of its purposive and teleological interpretative policy to invigorate the legal significance of the concept of citizenship. In terms of free movement and residence law, citizenship appears to be slowly manoeuvring the qualification of economic activity out of its previously dominant and unchallenged position in the legal hierarchy.

As the Court grandly stated in Grzelczyk, EU citizenship is destined to become the “fundamental status of nationals of Member States”. This portentous phrase has become a mantra of the Court of Justice, and several opportunities have been taken to restate it in the same terms: see for example Bidar v London Borough of Ealing. By 2005 the Advocates General responsible for advising the Court had been left in no doubt that their Opinions should reflect the growing scope and legal significance of EU citizenship. In his Opinion in Standesamt Stadt Niebüll, in which the substantive issue concerned the name of a child and a conflict between two national legal systems, Advocate General Jacobs stated that:

 “It thus seems to me totally incompatible with the status and rights of a citizen of the European Union – which, in the Court’s phrase, is ‘destined to be the fundamental status of nationals of the Member States’ – to be required to bear different names under the laws of different Member States… A rule of a Member State which does not allow a citizen of the European Union, whose name has been lawfully registered in another Member State, to have that name recognised under its own laws is not compatible with Articles 17 and 18(1) EC.”

The European Commission has also affirmed that Union citizenship should constitute the fundamental status of EU nationals. That said, this is obviously a controversial position, in particular in Member States where levels of so-called Euro-scepticism run high, and as a consequence there are many national governments  unwilling to support the assertions of these two key European Union institutions.

The recent case law discussed above represents a purposive and focused development of the EC Treaty provisions on free movement of workers and, as stated, it is submitted that the applicable case law and legislation on Article 39 is now likely to be subsumed and/or rendered nugatory by the general implications of EU citizenship rights. It is submitted that further Court of Justice decisions will clarify this trend and allow more specific conclusions to be drawn. At the time of writing, further cases on the meaning and effect of Article 17 EC and in particular Article 18 EC are at various stages of the legal pipeline, and the rulings ultimately handed down in their resolution will prove informative in the debate on which this paper centres. It is confidently submitted that these anticipated decisions will further extend the boundaries of the rights inherent in EU citizenship and continue to develop the utility and efficacy of this evolving status.

How such decisions will be received by sceptical Member States is another matter, and the future evolution of EU citizenship probably depends more on the national political consensus than it does on the pro-activity of the European Court of Justice and the will of the other EU institutions. In the meantime, the economic gateway criteria entailed in Article 39 are still important, but how long they will remain influential is a matter for debate. Ultimately, it is submitted, citizenship will become the foundation for the bestowal of all rights and duties, in particular those of free movement and establishment, but we are not there yet.

BIBLIOGRAPHY

  • The Treaty of Rome (1957, as amended)
  • The Treaty on European Union (Maastricht Treaty) (1992)
  • The Treaty of Amsterdam (1997)
  • Reich, Norbert (2005): The Constitutional Relevance of Citizenship and Free Movement in an Enlarged Union, European Law Journal 11 (6), pp. 675-698
  • Shuibhne, Niamh (2004): Legal implications of enlargement for the individual: EU citizenship and free movement of persons, ERA – Forum vol. 3, pp. 355-369
  • Steiner and Woods (2003): Textbook on EC Law, Blackstone
  • Iliopoulo, A. and Toner, H. (2003): A new approach to discrimination against free movers? D’Hoop v Office National de l’Emploi, European Law Review, pp. 389 et seq.
  • Kent, P. (2001): Law of the European Union, Longman
  • Tillotson and Foster (2003): Text, Cases and Materials on European Union Law, Cavendish
  • Europa: Gateway to the European Union
  • Foster (2005): EC Legislation 2005-2006, Blackstone's Statutes
  • Craig and de Burca (2003): EU Law: Text, Cases and Materials, Oxford University Press
  • Weatherill (2005): Cases and Materials on EU Law, Oxford University Press

Theories of Race and Ethnicity – Dissertation Sample

Theories of race and ethnicity have influenced media discourses of the war on terrorism

It is generally true to say that the American and British media have a shallow and dilettantish understanding of the race and ethnicity of the ‘enemy’ in the ‘war on terrorism’. These enemies, who are principally Arab and Islamic extremists or fanatics, represent, demographically, a minuscule percentage of the Muslim world. Nonetheless, the Western media’s presentation of ordinary Muslims is often identical or almost identical to their presentation of Islamic terrorists and fundamentalists. Both are shown as hateful and zealous religious extremists whose first aim is to overthrow Western democracy and freedom. There is often hardly even a crude attempt to discern between, and present differently, genuine terrorists and ordinary, peaceful and law-abiding Muslims.

It is as if the Arab media were suddenly to infer from the IRA bombings of Britain in the 1980s and 1990s that all white Irishmen are inclined to commit terrorism. No thinking Muslim could ever contemplate a suggestion as ludicrous as this; Muslims legitimately ask in protest, therefore, why their race and ethnicity should be portrayed so simplistically and crudely in the Western media. The uniform and monotonous image of Arabs and Islam promoted in the media arises from that media’s lack of comprehension of, and empathy with, the Muslim world and the Islamic religion.

Only a more thorough and objective analysis of the Islamic world can push back the media’s impulse to be influenced by racial and ethnic stereotypes about Muslims. If this re-orientation and re-education takes place it will be possible to wash away from the minds of the general public popular illusions about the involvement of Muslims in the ‘war on terrorism’ – for instance the assumption that Muslims in Saudi Arabia, Afghanistan, Indonesia and Britain all share the same terrorist inclinations as Al-Qaeda or the P.L.O. This essay explores the Western media’s perceptions of the Arab race and of Muslim ethnicity, and suggests how these perceptions influence its coverage of and commentary on the ‘war on terrorism’.

 ‘The Islamic teachings have left great traditions for equitable and gentle dealings and behavior, and inspire people with nobility and tolerance. These are human teachings of the highest order and at the same time practicable. These teachings brought into existence a society in which hard-heartedness and collective oppression and injustice were the least as compared with all other societies preceding it. … Islam is replete with gentleness, courtesy, and fraternity.’

 (H.G. Wells, 1932)

H.G. Wells’s eloquent words are one example among many of a long tradition in the West of admiration and appreciation for the civilization of the Arabs and the religion of Islam. Of late, the Western media has forgotten about the cultural achievements born in the Islamic world and seems no longer able to discern between the general Muslim public and the extreme groups of Muslims who commit atrocities such as the September 11th attacks or this week’s bombings in London. Our newspapers and television stations typically present Muslims – be they terrorist or civilian – in the following way.

They are either groups of young Muslim men torching American flags in the streets of Palestine whilst wielding rifles; or they are a crowd of agitated Muslims outside a mosque in north London preaching lessons of jihad; or they are the inculcated and indoctrinated disciples of the Quran. The implication in all these examples is that all Muslims are susceptible to the temptations of terrorism and can be seduced by leaders such as Osama Bin Laden. There is of course a patent absurdity in such a simple presentation: the massive majority of Muslims in all countries oppose, for instance, the train bombings in Madrid or the bombings of U.S. embassies in Kenya and Tanzania.

This image is sustained by the propaganda of stations such as CNN, ABC, and the BBC, which often display scenes of Arab children waving signs showing the face of Osama Bin Laden or Mullah Mohammed Omar, to imply that young children are manipulated and recruited into terrorist causes on a large scale. Newspaper slogans shout ‘This Fanaticism that we in the West Can Never Understand’ and ‘In the Heart of London Demands for Holy War’. If Muslims so unreflectingly projected upon modern Christians all the terrors that have been carried out in the name of that religion, and if Christians were to glimpse that projection, then we might have far greater empathy with the present simplistic image of racial and ethnic incitement in the Muslim world.

By promoting such racial and ethnic prejudices the Western media reveals what one scholar calls its Islamophobia (Tape, 2003). In Britain, newspapers such as The Sun, The Mirror, The Daily Express and the Daily Mail recklessly promote Islamophobia when they know the true nature of affairs is far less simplistic than they suggest. Amongst these misrepresentations perhaps the most serious is that which paints Islamic ethnicity as an intransigent and monolithic structure that systematically abuses women, squashes human rights, and loathes democracy. The media’s subscription to this political idea is highly dangerous. It convinces television and newspaper audiences that the actions of a few fanatical terrorists are typical of Muslims generally and so perpetuates an idea that the ‘war on terrorism’ is against a far wider segment of the Muslim world than it really is. For instance, British newspapers frequently imply similarities between terrorist cells training and operating in Afghanistan or Iraq and the activity of Muslim communities in north London or Bradford.

Jonathan Bignell’s book Media Semiotics (Bignell, 2002) and Rampton and Stauber’s Weapons of Mass Deception (Rampton & Stauber, 2003) have shown that television, newspaper and radio audiences intrinsically trust the message they are given – no matter what form this media arrives in. This makes the responsibility upon the media to portray accurately the influence of race and ethnicity upon the ‘war on terrorism’ all the more essential. Yet the media’s language about Muslim race and ethnicity often fails to separate Muslim terrorists from Muslim civilians.

As an example, the Runnymede Trust’s recent ‘Islamophobia’ survey (Runnymede, 2004) found that 85% of members of the British general public thought the terms ‘terrorist’, ‘extremist’, ‘fanatic’ and ‘fundamentalist’ synonymous with ‘Muslim’. Such a warping of the truth can only have been facilitated by the media’s own indiscriminate use of these terms. Moreover, whereas ten years ago these terms were only thought to be synonymous with Muslims abroad, now words such as ‘fanatic’ are being associated with British Muslims who are British citizens also.

How then did the Western media come to possess these theories about the race and ethnicity of Islam and their influence upon Muslim terrorism? The Runnymede Trust’s 2004 report illustrated that ideologically, spiritually and culturally Islam is perceived as the ‘other world’ by the Western media, and as the diametrical opposite of Western democracy. The media cements an ‘us’ and ‘them’ conflict that makes the rift between the two civilizations appear far wider than it is – remember, of course, how many Muslims live in Britain as legitimate British citizens and are an essential part of that civilization.

But The Daily Mail, for instance, still produces inflammatory headlines such as ‘Fanatics With a Death Wish: I Was Born in Britain, but I Am a Muslim First’. Slogans like these are intended to widen the perceived ethnic differences between Christians and Muslims, and to imply that Muslims give their allegiance to terrorist cells before their country – when, of course, most Muslims are fully law-abiding citizens. Moreover, all credible Islamic councils and Islamic scholars in the U.K. explicitly condemn the actions of men like Bin Laden – though such condemnations are rarely reported in depth in the media.

Jonathan Barker argues in The No Nonsense Guide to Terrorism (Barker, 2003) that the mainstream Western media is largely complicit in promoting stereotypes it knows to be false; at the same time it ignores acts of equivalent terrorism authorized by Western governments or committed by Western citizens. The Quran is represented equally simplistically and is implied to be a reservoir of motivation and justification for terrorists. Little attention is given by the media to the fact that the Quran is in the enormous majority of instances used for the spiritual enlightenment of Muslims, and only rarely for the purpose of justifying jihad or terrorism.

Likewise, the media is to be reproached for sensationalizing and dwelling upon the lives of lone extremist clerics such as Abu Hamza. The Daily Mail, for instance, featured the same provocative photo of Abu Hamza on seven consecutive days in September 2001 (Sep 13th-20th), even though he is a radical and unwelcome figure in the mainstream British Muslim community. This disproportionate focus upon the lives of extremists is intended to fix in readers’ or listeners’ minds the prejudice that all Muslims share opinions like those of Abu Hamza. In the words of Tape:

 ‘Abu Hamza has become the press’s mythical, personified construct that incorporates all the Islamophobic stereotypes that have become the pretext for much contemporary reporting. He is the Islamophobe’s perfect caricature.’

The effect of such media campaigns against ordinary Muslims is, paradoxically, to increase the likelihood of terrorists being created in Britain. The sensationalized images of extremist figures are now being mixed up with accusations that these figures are illegal immigrants who are accepting benefits whilst absconding from their duties as British citizens. Since the media has already established that these figures are typical of Muslims generally, it becomes easier to suggest that ALL Muslims are illegal immigrants abusing their status in British life and waiting to betray it.

All these points were eloquently portrayed in a caricature published in the Daily Mail on September 25th, 2001. An image shows a group of young Muslim men gathered before the Houses of Parliament waving posters proclaiming ‘Death to Britain and America’; the quotation beneath reads: ‘Parasite: (Chambers English Dictionary): a creature which obtains food and physical protection from a host which never benefits from its presence’. The implication of this crude cartoon is that the definition of a ‘parasite’ is applicable to all Muslims and not just to terrorists associated with Osama Bin Laden. Thus, through parts of the media’s manipulation of the true state of affairs, millions of Britons are duped into support for a largely phantom war on terror – phantom when applied to the majority of Muslims.

Günter Grass, the eminent German novelist, has said that it is not preposterous to draw analogies between the gross distortion of truth by Nazi propaganda against the Jews in Nazi Germany and popular European sentiment towards Muslims. Grass has even suggested that Islamophobia induced by misrepresentation of the ‘war against terrorism’ could become so intense as to produce a repetition of the Kristallnacht incident of 1938. Grass says that the intention of this media technique – practiced assiduously and methodically by the major U.S. news stations – is to demonize the enemy to such an extent that it is possible to justify almost any action against them.

The celebrated French social scientist Jean Baudrillard argues that modern media is a ‘hyper-realistic construct where the real and the imaginary continually collapse into each other’ (Baudrillard, 1990). In other words, the judgment of the public is so easily won that they almost automatically believe whatever is presented to them about the war on terror. Noam Chomsky has written at great length on this subject of media inculcation and terrorism (Chomsky, 1989, 2001). Chomsky argues that modern news has a fundamental institutional bias towards white elites in America and Britain. Thus in the film Manufacturing Consent (Chomsky, 1992) Chomsky says ‘Propaganda is to democracy what the bludgeon is to the totalitarian state’ – an instrument for the coercion of opinion.

The racial and ethnic predispositions of the Western media create a basic prejudice against news that does not conform to their mindset. Thus news has to pass through five ‘filters’ – filters of ownership, profit-making, government organization, pressure groups, and popular journalistic conceptions – before it can be passed to the public. When Chomsky’s model is applied to the media’s coverage of the war on terror it produces striking results. Chomsky speaks of ‘paired examples’, whereby news that has passed through these five filters receives a totally different presentation to news that has not.

For instance: the train bombings in Madrid passed these filters and so received the thorough and responsible coverage appropriate to the disaster. On the other hand, the indiscriminate destruction of a school in Baghdad in which fifty schoolchildren die does not pass the filters, and so there is little or no coverage of the Arabs and Muslims who perceive this act as an act of terrorism against them. Chomsky has remarked on this point that ‘The wanton killing of innocent civilians is terrorism, not a war against terrorism’ (Chomsky, 2001). By this definition the United States were terrorists in Vietnam, Afghanistan and Iraq. Chomsky accuses the media of failing to give any precise definition to the terms ‘race’, ‘ethnicity’ or ‘war on terrorism’. The advantage of such nebulous and foggy concepts is that they go unquestioned by the general public because they have no precise shape.

Although some of Chomsky’s opponents think the idea of ‘manufacturing consent’ is almost a renewal of the idea of ‘false consciousness’ in Marcuse’s One-Dimensional Man (Marcuse, 1991) – a work in which the general public are so indoctrinated by propaganda that they need special elites to advise them how to think – he is widely admired in the West for, amongst other things, demanding that the mainstream media question more thoroughly their assumptions about Islam and its influence upon the ‘war on terrorism’.

In the final analysis, theories and misconceptions of race and ethnicity have produced a fundamental and unnecessary bias in the Western media’s coverage and discourse on the ‘war on terrorism’. Moreover, this war threatens to spill over into a war of ideologies where the secular democracy of the West is in combat with the authoritarian and dictatorial regimes of the Middle East. There are many responsible journalists and media organizations working in the West who are aware of the subtleties and intricacies of the ‘war on terrorism’ and who conscientiously make the distinction in their coverage between Muslim terrorists and the vast bulk of ordinary Muslims who reject terrorism.

The British newspaper The Independent is an outstanding example. All too often, however, this simple yet vital distinction is lost. Muslims of all attitudes, behaviors and beliefs are huddled together to be demonized by the press as zealous religious extremists intent on overturning Western democracy and civilization. Prejudiced theories about the racial and ethnic constitution of Islam and Muslims feed these misrepresentations. The vital task for the future of American and British journalism is for journalists to educate themselves deeply about the racial and ethnic makeup of the Middle East so that they might cover the ‘war on terrorism’ more objectively and perhaps facilitate conciliation between ordinary Muslims and the West.

BIBLIOGRAPHY

  • Chomsky, N. (1989). Necessary Illusions. Pluto, London. 
  • Chomsky, N. (1989). Language and Politics. Black Rose, Montreal.
  • Chomsky, N. (2001). 9-11. Seven Stories Press, New York. 
  • Chomsky, N. (1992) Manufacturing Consent: Noam Chomsky and the Media. Zeitgeist Films.
  • Barker, J. (2003). The No Nonsense Guide to Terrorism. Verso, London.  
  • Baudrillard, J. (1990). Fatal Strategies. Pluto, London. 
  • Bignell, J. (2002). Media Semiotics: An Introduction. Manchester University Press, Manchester.
  • Burke, J. (2003). Al-Qaeda: Casting a Shadow of Terror. I.B. Tauris, London.
  • Ferguson, R. (1998). Representing ‘Race’: Ideology, Identity and the Media. Arnold,  London.
  • Friedman, G. (2004). America’s Secret War: Inside the Hidden World-Wide Struggle Between America and Its Enemies. Doubleday, New York. 
  • Jewett, R. (2003). Captain America and the Crusade Against Evil: the Dilemma of Zealous Nationalism. 
  • Marcuse, H. (1991). One Dimensional Man. Routledge, London.
  • Parenti, M. (2001). The Terrorism Trap: September 11 and Beyond.  City Lights, San Francisco. 
  • Pilger, J. (2003) The New Rulers of the World. Verso, London. 
  • Rampton, S. & Stauber, J. (2003). Weapons of Mass Deception: the Uses of Propaganda in Bush’s War on Iraq. Robinson, London.  
  • Runnymede Trust. (2004). Islamophobia: Issues, Challenges and Actions. Trentham Books, London. 
  • Tape, N. (2002). Gender Discrimination and Islamophobia. Islamic Human Rights Commission. 
  • Wells, H.G. (1932). After Democracy: Addresses and Papers on the Present World Situation. Watts & Co., London. 

Media a Version of Reality – Dissertation Sample

Assess the Argument that Media representation has to be viewed as a version of reality

It is commonly accepted that in our postmodern age the notion of ‘reality’ is a function of perception and perspective. Many intellectuals, in particular semioticians, hold that the concept of completely objective truth is an outmoded fallacy and that truth, such as it exists, is found in many different incarnations – hence semiotician Daniel Chandler’s notion that all realities are not equal.

But it does not require a doctoral degree to understand that two people can experience the same event, remember and/or perceive it differently, and ascribe different ethical and moral properties to it. Furthermore, the relaying of information – anecdotal, historical, educational – from party to party is subject to the same potential pitfalls. To the extent that the media plays a powerful, if not supreme, role in relaying information to the public, it can exert a considerable amount of influence over how an event taking place in a geographic location distant from the viewer is perceived by that viewer. In fact, what is actually happening at that location may not truthfully be what the media is portraying to the audience. In short, the media is prone to present its own version of reality.

Historically, the news aspect of the media has sought to provide to the audience as many different perspectives on newsworthy events as practically possible, and then let the audience draw their own conclusions.  The non-news aspects of the media, which are primarily oriented towards entertainment, generally offer fictionalized versions of human stories and events which contain exaggerated or hyperbolized elements calculated to be as entertaining, shocking, salacious, humorous, or provocative as possible in order to garner more viewers, and therefore higher ratings, and then higher advertising revenue, and higher profits.   In the last ten years or so, the distinction between entertainment/fiction and news/non-fiction in the media has become increasingly blurred.  Entertainment programs on television, for example, once were the exclusive domain of fictional drama and comedy material. 

Now, so-called ‘reality’ shows occupy a large space within that same universe, where cameras capture non-actors in what appears to be real-life situations – never mind that the footage of these situations is selectively edited to convey a story the producers wish to tell, never mind whether that particular narrative storyline existed organically during the filming.  On the other hand, news and nonfiction programs have now taken on glitzy, sensationalized elements designed to make themselves more appealing to the audience.  News programs tantalize, titillate, and terrify audiences with headlines such as – “SHARK ATTACK SEASON – ARE YOUR CHILDREN SAFE?”  — leading audiences to wonder if sharks are preying en masse on swimmers, never mind the scientific fact that one is more likely to be struck by lightning than be bitten by a shark.

In blurring the line between fiction and non-fiction, between news and entertainment, the media has distorted the reality of ordinary people’s lives rather considerably.  If aliens were to visit Earth and become readers of American and British newspapers, and become viewers of British and American television in the year 2005, these aliens would likely come to the conclusion that the most important current event issue to the British and American public was the criminal child molestation trial of Peter Pan-ish pop singer Michael Jackson.  The fact that both countries are involved in a bloody and monstrously expensive war in Iraq, after being misled by their respective governments into supporting a dubious military action which is not faring particularly well, would possibly be lost to these alien observers, who – not knowing any better – could very well assume that the tendency of the media to focus on the more base, vapid, and sensationalistic elements of society constituted a true reflection of the reality of British and American societies. 

Sadly, the media’s bad habit of creating news where there is none, and ignoring relevant news in favor of stories that appeal to more base elements of human psychology, contributes to a vicious cycle.  Impressionable viewers, particularly children and teenagers, look to the media to instruct them as to what to believe in and what to think about; what is relevant, and what is fashionable.  In homes where the absence of parental or church guidance leaves a vacuum, the media is all too happy to fill that vacuum with its own distorted sense of reality, a reality dominated by the inexorable gravity of consumer capitalist culture.

This leads us to the issue of advertising and how it affects people’s perception of reality.  As much as the media has evolved, taking over each successive technological invention in the field of mass communications – newspapers, magazines, radio, television, the internet – one engine has remained constant driving it all – the deity of advertising and the number of people the advertiser can attempt to sell its products to.   (All media is funded by ratings – the number of people consuming the information in a particular medium during a fixed portion of time – and the advertising revenue commanded by higher or lower ratings.  The more readers, viewers, or web site visitors, the higher the price can be charged to reach those audience members.  And where ratings — and hence advertising revenue — falter, the underlying medium is considered a failure.) 

Advertising has only one goal, and that is to sell products. Whether the products are really of any value is rarely relevant. Products that in a third-world country would be considered a fantastical luxury – like a BMW with leather seats, for example – are advertised to gullible consumers as if they were a necessity. Mundane products, such as beer or perfume, are connected through their advertisements with desirable objects, such as sexy women or strapping, masculine men, in order to subconsciously tell the audience that if they drink X beer or spray on Y perfume, they will become more desirable to the opposite sex.

In this way, consumer capitalism works hand in hand with the media to distort reality. The most terrifying (and genocidal) example is that of the tobacco companies, whose advertising campaigns try to convince potential teenage customers that to smoke is ‘cool’, when in fact to smoke will kill you, or as consumer advocate Jef Richards says, “‘Children are our future’ is a phrase coined by tobacco advertisers” (Richards, 1990). Consumers are led to believe that their worth and status as human beings are contingent upon their acquisition of products sold in advertisements which accompany programs, news articles, films, and so on, which themselves are less and less interested in reflecting reality and more and more obsessed with generating ad revenue.

Increasingly, the public service value and/or quality of the programming in a medium is less relevant, and ‘bottom-line thinking’ becomes more relevant – or simply put, if it doesn’t sell, it must be replaced.   For example, in decades past, the major broadcast television networks in the United States – CBS, NBC, and ABC – accepted without concern the fact that their news divisions would not be money-generating behemoths.  The accepted business model was that the entertainment divisions would provide the profits to allow the news divisions to engage in their socially important roles of reporting the news as fairly and objectively as possible, without the pressure of having to profitably support advertising revenues.  However, with the growing trend of giant multinational corporations acquiring television networks – often, corporations whose core business has nothing to do with the media – the demand for maximum profits from all divisions creeps into news.  

The most noxious case is that of Fox News, the 24-hour cable news channel owned by media mogul and perennial corporate raider Rupert Murdoch.  As revealed in the critically acclaimed 2004 documentary Outfoxed: Rupert Murdoch’s War on Journalism, Fox News has discarded all semblance of fairness, accuracy, and objectivity — in favor of a blatantly conservative point of view that generally matches the daily talking points of the administration of President George W. Bush.  In a disturbingly Orwellian touch, Fox News’ slogan is “Fair and Balanced,” when in truth their president, Roger Ailes, was a former campaign manager and media advisor to President George Bush, Sr., and to the Republican Party, and the news management issues memos each morning to their reporters demanding their adherence to a conservative ideological slant. 

Their on-air hosts are almost exclusively bombastic personalities with a radically conservative ideology, who are prone to distort facts and even to yell and swear at guests who dare to disagree with them during broadcasts. The aim: sensationalism and distortion of reality designed to reinforce the conservative worldview of roughly half of the United States’ viewing population. It has been a spectacular financial success, due to the huge ratings the network has garnered in the past few years. The casualty: the abandonment of the fundamental principles of journalism and, worse, of fair and balanced perspectives on news – on reality itself.

The media has clearly been corrupted by the influence of consumer capitalism, and to that extent its ability to accurately reflect the reality experienced by a reasonable cross-section of its audience has been compromised. Until this phenomenon is corrected, the schism between realities will only worsen, as will the schizophrenia of Western societies caught in the netherland between those realities.

BIBLIOGRAPHY

  • Chandler, Daniel.  Semiotics.  Routledge Publishers, 2001.
  • Parenti, Michael.  Inventing Reality: The Politics of News Media.  Wadsworth Publishers, 1993.
  • Richards, Jeff I.  Deceptive Advertising: Behavioral Study of a Legal Concept (Communication). Lawrence Erlbaum Associates Publishers, 1990.
  • Greenwald, Robert (director).  Outfoxed: Rupert Murdoch’s War on Journalism.  Carolina Productions, Film Transit International (distributors), 2004.

Google Print’s – Dissertation Sample

What is the academic publishers view of Google Print’s entry to the library project?

In 2005, Google decided to launch a controversial campaign to digitize the world’s information and place it all in a digitally accessible and completely free, ad-sponsored database called Google Print. Although this initially seems a good idea, allowing readers and researchers around the world access to a host of material previously held only in dusty archives at Harvard University, the revolutionary way in which the project could reshape the economics of the academic and publishing sectors of the literary market has serious implications for small academic presses, which are often highly dependent on library sales and copyright to make enough profit to stay afloat.

The initiation of Google Print has not been without obvious difficulties, especially concerning copyright, the reduction of whole libraries into simple ad-sponsored information, and the questions many companies (who later filed a lawsuit) have over the way in which the project will distribute the wealth gained from pay-per-view sales and advertising revenue – a particular point of concern for publishers. Despite this, many academic publishers and libraries were keen to embrace and exploit the new initiative, seeing in it a way of disseminating works that would otherwise only be available in specific libraries. Susan Kuchinskas notes that, long before the first version of Google Print was released, “Google had begun working with University of Michigan, Harvard, Stanford and Oxford Universities, and the New York City Public Library to digitize public-domain books in their collections.”

This initially clandestine operation to digitize the world’s books has serious implications for academic publishers and for the ways in which people read and consume text, and it also poses a serious threat to the world of the printed book – a legacy that has protected, and indeed heralded, the small academic printing presses as bastions for disseminating high-quality information that other, more mainstream publishers would choose to ignore. Barbara Fister and Niko Pfund say that independent presses are more passionate about important literature: “[t]rade houses in New York City said ‘good book, but no way we can publish it.

This stuff is radioactive.’ So the University of Minnesota Press took it on—and got in a world of trouble with conservative critics and eventually its legislature. The press could have lost its funding, could have lost everything, but the publisher didn't back down.” Indeed, the funding mechanisms of Google Print have taken a great deal of sustained criticism from authors, scholars and publishers alike, and it is thought that it will take a while before these problems are eventually ironed out, even though there is a certain air of inevitability to the entire venture.

So there are a number of difficulties that the academic presses are pushing to get noticed. Steven Levy, in Newsweek, says that “[the digitization process] involves Google's success in transforming this informational bounty into an amazingly profitable enterprise. It's based on the very reasonable premise that ads are most effective when pegged to what you want to know, as opposed to what you want to watch.” Of course the presses’ concern is justified to an extent. The exploitation of texts made available by academic publishers, with ad revenue adding further to the coffers of the massive Google enterprise, has serious potential to undercut the presses that actually publish the work. In an email sent out on September 23rd, the Authors Guild stressed that:

“Google is worth roughly $90 billion, making staggering profits through its online advertising programs. Its investment in Google Library is intended to bring even more visitors and profits to its website and ancillary services. The Guild is all for profit, but when the profit comes from the works of authors, the authors should be properly compensated.”

Indeed, the power of Google here is a very important factor: as the Authors Guild suggests, Google is not a charity; it exists primarily to make profit, even if it does so in a way that provides consumers with a free, easily accessible search engine, and even if it bases its business ideology on the free distribution of information. It also has to be stressed that academic presses are not in a particularly healthy state as it is. According to Barbara Fister, this is because of the gradual tightening up and privatisation of the funding given to libraries for distributing these works – the library is a major outlet for academic publishers, since it can afford the expensive price of academic work and can distribute the information to scholars who would not ordinarily be able to afford access to the material. At the NACUA conference held on November 10th 2005, Sanford G. Thatcher said that the problem with academic publishing is that “[w]e publish the smallest editions at the greatest cost, and on these we place the highest price, and then we try and market them to the people who can least afford them.”

Academic publishers therefore tend to view the enterprise of putting library collections online in two ways. On the one hand, it is important that these works become available to a bigger market, as an alternative means of distribution has to be found beyond the crumbling and underfunded American library system; on the other hand, the Google Print project may reduce the amount of money going to the actual publisher by taking effective control over the distribution process in what is effectively a monopoly of library resources. As the small academic presses are highly dependent on the high profit margins gained from library sales, an effective privatisation of the distribution process could prove disastrous to publishers and therefore to the production of alternative work.

Overall, Google Print has been fraught with difficulties and legal wrangling over copyright issues. Although the publication of public-domain books is a relatively easy issue, and is already practiced widely on the Internet by websites such as Project Gutenberg, a non-profit e-book service offering thousands of online titles, as well as by other sites that offer public-domain material for consumption, Google has controversially decided to offer, at the publisher’s discretion, a pay-per-view option for certain academic and historical titles previously held only in academic storage – in public places such as libraries and museums.

Naturally, this brings to light a number of questions concerning both authenticity and copyright, and whether it is appropriate that Google, despite its laissez-faire attitude to censorship, should be allowed control over such a vast array of archived material.  The Association of American University Presses published an open letter in 2006, suggesting that there is “mounting alarm and concern at a plan that appears to involve systematic infringement of copyright on a massive scale.”   The systematic scanning of copyrighted material for eventual dissemination on a wide scale has also prompted the Authors Guild, an organisation designed primarily to protect the rights of authors, to file a lawsuit, along with five other publishing companies, against Google’s digitization of copyrighted work, seemingly against the will of the small presses whose works are being digitized by Google.  They argue that:

“Google, a company whose current market capitalisation is over $80 billion and growing, plans to further expand its business by making digital copies of copyrighted works in the collections of at least two major universities, distributing copies to those libraries and displaying portions of those copies, all without permission of the copyright owners.  On the libraries’ side, they are turning copies of copyrighted books over to Google for digitization and receiving digital copies of the books in return.  At least one of them apparently contemplates some form of further distribution […] and all of it […] without permission of the publishers.”

Google, on the other hand, stresses that there is an opt-out clause, noting on the official Google blog that “any copyright holder can exclude their books from the program.”   The assumption is made, however, that all books from university libraries, provided they are not subject to the opt-out clause, will at least be scanned by Google.  Google does ensure that work cannot be printed, copied or pasted when being accessed by readers: “Google equates viewing the displayed results of copyrighted works to the ‘experience of flipping through a book in a bookstore’ or library.  To further protect the copyright holders, Google disables the user’s print, save, cut and copy functions on the text display pages so that the user is limited to reading the information on screen.”   So, seemingly, if the material is impossible to print or distribute, then copyright issues need not be a problem.  The reaction to Google among authors’ groups and publishers is more about its flouting of copyright rules, which, it is argued, exist to protect the small presses from plagiarism and exploitation by larger, more profitable businesses.

Google Print has apparently scanned works that are copyrighted and, even though it will not publish material if an exemption letter is sent to it, the reaction is still that flagrantly disobeying copyright laws in this manner does not go down well with the academic presses and larger publishers, and could be seen as a way of undermining their authority over the distribution and ownership of the text.  The National Research Council suggests that: “Fundamental to these uncertainties [about ubiquitous electronic access to books] is the matter of ownership, which libraries rarely have, given that electronic information produces no fixed artifact for libraries to possess and cherish.”   It appears that the advent of Google Print could change the fabric of ownership on a profound level.

Chapter Six: What does the future hold for Publishing?

The advent of Google Print is therefore taken by many critics and journalists as a revolution in the ways in which information is distributed, in how the economics of small presses work, in the importance of libraries as non-profit social and community services, and in how academic literature (as well as other literature) is read and consumed.  Even in the world of small presses, the impact of the Internet on the sales and distribution of literature is impossible to avoid: “The rise of the book superstore has implicitly changed the overall economics of access to books and information. Where once a good public library was the best and most accessible source of materials for many, if not most, communities, bookstores of similar size may be a few doors down the block, open longer hours, and with enough copies of popular titles to satisfy almost all comers.”  And to exacerbate this trend: “most libraries and physical bookstores are dwarfed by online bookstores.”

This signalled, at first, a radical change in the process of bookselling on the high street.  Borders marketed themselves on having large amounts of stock, reading areas, and places where people could feel relaxed and not necessarily coerced into making a purchase.  The environment is thus very much like a library, insofar as no pressure is exerted by the company to make a purchase, and reading in such stores need not be conducted illicitly.  The process was pushed into the digital realm in 1995 by Jeff Bezos, the entrepreneur who founded Amazon.com.  The National Research Council suggests that:

“Through Amazon.com, large collections of books have come one step nearer to their readers – and no library has played a part in this dramatic convergence of reader and book. In the case of the virtual superstore, the reader controls the atmosphere, which need not be as public as a library or bookshop and may be as comfortable as reading in a lounge chair wearing fuzzy slippers and a dressing gown.”

The impact of online retailers on the small presses is therefore twofold: the increased access to material that would otherwise be unavailable improves sales; but, by undermining the role of the library, it also undermines a major source of income for small academic presses.  One imagines that the digitization of copyrighted work will have the same overall impact as the advent of the bookstore-as-library pioneered by Borders, and of Amazon.com.  But Google’s interest in digitizing work may eventually have the opposite effect to that intended: undermining public libraries and support for them, and pushing money from academic retailers to the big players in the distribution of academic material.

This will have the effect of reducing the variety and richness of the publishing sector, making it much harder for publishers to take risks.  The same pattern, essentially a deregulation of books, is beginning to occur with television and newspapers.  As James Curran suggests of the print media: “Fewer journalists produce more stories more frequently […].  Understanding requires time, time costs, and reporters everywhere may be becoming more […] vulnerable to the well-packaged official lines produced.”  So the increased privatisation of the media sector overall may eventually serve to reduce the amount of funding that goes to small, investigative journals and very small niche markets, a trend which can quite easily extend to the publishing of books online.

Certainly, the whole process of bookselling has changed significantly with the commercial development of the Internet in general.  The massive process of digitising entire libraries of material – a project that Google intends to pursue as soon as it can clear up the various copyright lawsuits and issues raised by smaller companies – will undoubtedly have effects as wide-sweeping as those that followed when globalised bookselling began to take hold through Borders and Amazon.com.  The Google Print service promises to remove another layer separating the reader from the writer and, in doing so, by creating a literature-on-demand system, will have profound impacts on the ways in which libraries function and on the ways in which not only the academic and small presses but also the bigger presses operate.  For better or for worse, it will shift the dynamics and economics of literature away from the state-funded public sector, as demonstrated by the slow erosion of funding for research and local libraries, and towards the private sector, as epitomised by the technological giants Google and Microsoft.

Whether the impact on journals, the academic presses and the larger presses will be positive or negative is very difficult to gauge, and depends to some extent on the behaviour and responsibility of Google, which is, despite its seemingly benevolent image, an organisation whose primary concern has to be to make a profit from those presses.  As readers gradually abandon print media for a resource that is more easily available, and has a much greater stock than the average research library, the presses will increasingly be forced to use Google exclusively as a means of distribution.

It is also difficult to tell whether this will undermine how the “printed” word is used – instinctively, people are still drawn to the word printed in a book, as the National Research Council suggests: “an online screen is hostile to such prolonged congenial or intense reading.  ‘You can’t take it to bed or to the beach or onto a bed with you,’ is the oft-heard lament.”   Indeed, a great many book publishers offer their works online on the assumption that online reading is much less comfortable than offline reading, and that it is simply a more satisfying experience overall to be in possession of the book.  The crisis that small publishers are facing will certainly not resolve itself overnight, but, at a time when academic presses are frequently going bankrupt for lack of sales, perhaps the revenue generated from the Google Print service will give the sector a welcome boost and revitalise an industry previously held to be in steady decline.  Hopefully this will prove to be the case and, judging from Google’s relatively good track record in providing adequate supplier-side support, the company will treat the current lawsuits and copyright problems as mere teething problems in creating a wholly new en-masse system of distributing information.

Bibliography

  • Curran, J. & Seaton, J., Power Without Responsibility: The Press & Broadcasting in Britain, Routledge, London, 1997
  • Fister, B. & Pfund, N., We’re Not Dead Yet, from Library Journal, Nov. 5th 2005
  • Fowler, P., Google and the Book Publishers: Testing the Limits of Fair Use in the Digital Environment, from NYSBA Bright Ideas, vol. 14, no. 2, fall 2005
  • Hanratty, E., Google Library: Beyond Fair Use? from Duke Law & Technology Review, 4th May 2005
  • Kuchinskas, S., Google Print Goes Live, from Internet News, May 27th 2005
  • Levy, S., Google’s Two Revolutions, from Newsweek, Dec 27th / Jan 3rd
  • National Research Council, LC21: A Digital Strategy for the Library of Congress, National Academy Press, Washington D.C., 2005
  • Thatcher, S. G., Fair Use In Theory and Practice: Reflections on its History and the Google Case
  • Official Google Blog

Levels of Sedation – Dissertation Sample

Levels of sedation and its effects on prolonging mechanical ventilation in critically ill adults

Ventilation and sedation are procedures commonly seen in ITUs and other critical care establishments. They can be used for a number of different therapeutic reasons and to help treat a number of different clinical situations. (Marx WH et al 1999)

The need for mechanical ventilation is one of the main criteria for admission to an ITU. Ventilation is usually administered by endotracheal intubation (occasionally by tracheostomy) together with sedation. Sedation is clearly needed for the intubation to be tolerated, but the degree of sedation is more often than not a matter of professional clinical judgement on the part of the responsible anaesthetist. It is a common observation that optimal sedation levels can both improve the quality of care and reduce its duration. (Rainey TG et al. 1998) It also follows that this may well reduce the time spent in intensive care. (Kollef et al 1998 (I))

Equally, inappropriate or excessive amounts of sedation can increase the length of time spent on the ventilator, which has its own disadvantages and harmful effects. (Kollef et al 1998 (II))

The purpose of this literature review is to examine the rationales and evidence base behind this combination of therapy. (Berwick D   2005)

Literature review

A good starting point for our considerations is the paper by Brattebø G (et al 2002). It takes as read our original premise that optimal levels of sedation are imperative for optimal outcomes in ventilated patients. It accepts the need to devise some kind of protocol to allow for an improved sedation strategy. The authors devised an observational study which not only introduced guidelines for sedation control but, interestingly, also took the opportunity to observe and analyse the actual methods of adoption as the medical and nursing staff tried to implement it. It is this second part of the study design that perhaps sets it apart from many of the prospective studies that we will be reviewing here. (Cochran and Cox. 1957)

The NHS has learned many lessons relating to change management. In an organisation as large and cumbersome as the NHS, the effective introduction of new (or even good) ideas needs careful implementation management. Over the years the NHS has witnessed (and suffered from) the inept management of a newly introduced concept. (Nickols F.2004)

The concept may be good, but the method of its introduction can render it useless. (Berwick D. 1996)

Clinicians who have been in practice for more than twenty-five years will remember the disastrous introduction of the Griffiths Report (Griffiths Report 1983).

Many would say that, in principle, it was a good, workable system, but its introduction and implementation were so inept that it was withdrawn before it could reach a fraction of its full potential. (Davidmann 1988) Brattebø’s paper takes stock of this potential for poor implementation and both analyses and assesses the protocol’s introduction in order to draw appropriate conclusions on the matter.

The study was not on a large scale but this allowed for more careful and intimate observation of the issues under investigation. It was set in one ITU in a University hospital and encompassed all staff and patients (over 18) passing through the unit over an 11 month period.

One interesting strategy was that the authors considered and evaluated different methods of achieving staff conformance as the study progressed, and progressively adopted those methods that were found to be the most effective. (Carey et al. 1995)

The paper is both long and involved. The important outcomes of the investigation were that, by optimising the sedation given to the patients, the mean ventilator time decreased by 2.1 days (from an average of 7.4 days before the trial to 5.3 days after it). This also allowed a reduction in the mean period of time spent in the ITU by an average of one day (9.3 days to 8.3 days). The authors also record that these reductions and efficiencies were not associated with any increase in adverse incidents (accidental extubations etc.). One particularly significant comment made by the authors was:

Lessons learnt: Relatively simple changes in sedation practice had significant effects on length of ventilator support. The change process was well received by the staff and increased their interest in identifying other areas for improvement.

The authors describe their methodology as using the “breakthrough method” (Marx et al. 1999) in which they make use of multiple short cycle improvements (Plesk PE 1999). They describe the method as entailing:

Setting goals, choosing appropriate small changes, and measuring whether the changes do lead to improvements; if so, the changes are incorporated in the departmental routines (Rainey et al 1998).  

The backbone of the study was the concept that, in cases of respiratory failure of any cause, patients are generally given both sedation and analgesia most often by the method of continuous infusion. (Brook AD et al 1999)

Commonly, the responsible clinician tends to err on the side of safety and prescribe heavier doses than may actually be necessary, with the result that the patient becomes more heavily sedated than is required (Kreiss et al 2000); the corollary of this is that the patient will tend to spend more time on a ventilator than is absolutely necessary (Kollef M et al. 1998 (I)).

The alternative, say the authors, is to introduce a scoring system which can quantify when the sedation is “sufficient but not excessive” so that the lowest rates of sedative infusion can be achieved with the use of occasional supplemental bolus doses (Barr et al 1995). The natural inference with this system is that this will reduce the time spent on the ventilator. As we have discussed, this was one of the outcome measures of the study.

The actual process is described in detail in the paper and therefore we will not present it here, but it hinged on the application of the MAAS scale (Devlin et al 1999) and correlation with the degree of sedation that was required to produce a particular score on that scale.

The protocol and the introduction of the scheme appear to have been meticulously managed, with multiple staff presentations, wall posters, feedback questionnaires, and other management tools. (Shortell SM et al. 1998)

The key point to absorb from this study is that the application of this system resulted in a reduction of roughly 30% in ventilator time for patients on this particular ITU. A fully critical appraisal would have to observe that this could simply be the predictable result of introducing a fixed protocol into a system of lax and unmanaged clinical practice which allowed for a comparable overuse of ventilator time in the unit. There is no evidence to support or refute this point of view, but it would be charitable to assume that the clinical staff were doing what they thought best at the time. This method allowed them to have more precise control over the sedation levels achieved. (Henriksen and Kaplan 2003)
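
To make the size of this effect explicit, the short calculation below reproduces the approximate percentage reductions implied by the figures quoted above. It is an illustrative sketch only, based on the numbers reported in this review rather than on any re-analysis of the study data.

  # Illustrative recalculation of the relative reductions reported for the
  # Brattebø et al (2002) study, using the figures quoted in this review.
  ventilator_before, ventilator_after = 7.4, 5.3   # mean days on the ventilator
  itu_before, itu_after = 9.3, 8.3                 # mean days spent in the ITU

  def relative_reduction(before, after):
      # Reduction expressed as a fraction of the baseline value.
      return (before - after) / before

  print(f"Ventilator time: {relative_reduction(ventilator_before, ventilator_after):.0%} reduction")
  print(f"ITU stay: {relative_reduction(itu_before, itu_after):.0%} reduction")
  # On these figures the ventilator time falls by roughly 30% (about 28%)
  # and the ITU stay by roughly 11%.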

It is interesting to note that the authors stated that as the intervention was aimed primarily at the clinical staff, it was not considered necessary  to get patient consent. Although clearly it would have been difficult to obtain direct patient consent, we have to express surprise that proxy consent was not obtained on behalf of the patients from the relatives. (Sugarman J & Sulmasy 2001)

We can now turn our attentions to a paper that we have quoted in support of a point in the Brattebø paper. Kreiss (et al 2000) produced a paper that took a similar approach but with different methodology. This study arose from the observation that continuous sedation levels tended to impede the clinical assessment of the patient and that it was common practice to allow “windows” of lighter sedation to allow neurological assessments together with assessments of the mental state. (Kong et al 2003)

The study compared the outcomes of one group who received reduced sedation at the discretion of the responsible clinician, who determined when appropriate neurological examination was necessary, with the outcomes from a second group who were “woken” on a daily basis irrespective of clinical need. (Brock et al 1998) A critical assessment would have to conclude that the cohort was not particularly large (with about 60 patients in each group), although it is accepted that the numbers of patients available for such a study are, by necessity, limited.  (Grimes et al. 2002)

The results, however, showed surprisingly different outcomes in the two groups. The mean time on ventilation and sedation was 7.3 days in the control group but only 4.9 days in the intervention group. The study also showed an even more marked reduction than the Brattebø study in the average time spent in the ITU, from 9.9 days (control group) to 6.4 days (intervention group).

It should be noted that in this study a number of adverse incidents occurred, including three patients in the intervention group who removed their endotracheal tube.  (Vassal et al 1998)

This actually compared favourably with the four patients who removed their tubes in the control group (see on).

In the aftermath of the Kreiss paper, Gorman (et al 2004) published a similar study, which considered the effect of interruption of the continuous levels of sedation when patients were undergoing mechanical ventilation in the ITU.

Their object was to assess whether this did have any effect on the length of time that the patient spent on the ventilator. The study was not particularly large (by study standards) but was large in comparison with other ITU based studies with an initial entry cohort of 150 patients.

In contrast to the Kollef study (Kollef et al 1998 – see on), this study was truly randomised: once the inclusion criteria had been met, patients were allocated to one of two groups. One group had a continuous infusion of sedation, while the other group were woken daily and then restarted on a regime which entailed recommencing half the dose of medication and retitrating until the required level of sedation was reached, defined as level 3–4 on the Ramsay scale. (Brock WA et al. 1998)

In addition both groups were randomly subdivided further to be sedated with either midazolam or propofol, so there were actually four investigation groups.

In broad terms, the outcome measures were the length of time that the patients spent on the ventilator together with the overall time they spent in the ITU and how these outcomes correlated with the method of administration of the appropriate sedative.

Before we consider the results, we must consider the structure of the trial and compare it with the seemingly similar Kollef trial. There are two major differences in this trial’s design which make it significantly different from Kollef’s trial. One very significant point in the design of the Kollef trial was the fact that the control group was sedated to Ramsay 3–4 levels in order to match the required levels in the intervention group.

Although this clearly makes for an easier and more direct comparison, it is possible that this level of sedation was actually greater than was required for a particular patient. This may have been a significant confounding factor in the eventual findings with regard to ventilation time.

The second problem was that the Kollef study was not truly “blinded”: although the treating physicians did not know which group their patients had been assigned to, the structure of the study required a recording nurse to sit with the patient to record their levels of sedation at all times. This could not fail to have been noticed by the clinicians in charge of the case, and may have introduced a source of bias into the results.

The Gorman trial took account of both of these features so that they were not potential confounding factors in the newer study. The results were still similar, although now possibly more statistically valid. The interruption group required less time on the ventilator and less time in hospital than the continuous infusion group. The authors note that interrupted sedation and retitration have now become standard recommendations for the guidance of clinicians in the ITU setting. (Jacobi et al. 2002)

We have referred to the complication of unintentional extubation (UEX) of the patient. This is an unusual but occasional consequence of lightening the sedation level of a patient who is being mechanically ventilated. (Esteban et al 1999 (I))

It is enlightening, therefore, to consider the study by Boulain (1998), who reviewed this phenomenon. The author took the potentially rather unpromising step of designing a prospective cohort trial to examine the occurrence, following nearly 450 patients over a two-month period. In the study he recorded that over 10% of patients had at least one episode of unexpected extubation during the study period, which is rather higher than some other authorities have reported. (Krinsley JS & Barone JE 2005) (Epstein et al 2000) The direct relevance to our considerations comes from the sentence:

 
By use of multivariate analysis, we identified four factors contributing to unexpected extubation: chronic respiratory failure, endotracheal tube fixation with only thin adhesive tape, orotracheal intubation, and the lack of intravenous sedation.

The third factor is somewhat artificial, as intubation is an obvious prerequisite of unexpected extubation in the first place. (Betbese et al. 1998)

It was noted that, at the moment of unexpected extubation, over 61% of the patients were showing signs of clinical agitation. This is almost certainly an indication that sedation levels were not sufficiently high. The converse of this argument is that, later in the paper, the author notes that of the 46 occasions when this happened, only 28 led to reintubation. The implication is that the 18 patients who were not reintubated were presumably judged not to need further assistance with their breathing and therefore, almost by definition, were oversedated just prior to the moment of unexpected extubation.

It is worth considering, in passing, the other two papers quoted above, as they add to our discussion of the issue. The Krinsley (et al 2005) paper has just been published, with the results from a fairly substantial cohort of patients (100) who had unexpected extubation compared with a control group of 200 patients (all mechanically ventilated).

This paper did not specifically look at the factors that were associated with unexpected extubation, but concerned itself mainly with the factors that were the consequence of unexpected extubation. The authors note that there was a mortality rate of 20% associated with unexpected extubation, which is at distinct variance with the Boulain study (which cited only one death in the unexpected extubation group). This paper cites a reintubation rate of 56%, which approximates to the rate in the Boulain study (28 of 46 episodes, roughly 61%).

Interestingly, the author performed multiple logistic regression calculations on the results and found that:

age was the only predictor of the need for reintubation after unexpected extubation and that age and the need for reintubation were the only predictors of mortality after unexpected extubation.

These are results which we have not seen in any other published work. The author  also comments that there is a statistical correlation between the actual event of unexpected extubation and an increase in the length of stay in the ITU but interestingly, there is also a statistical correlation with a reduction in the mortality rate. (Chevron et al 1998)
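
To make the nature of this kind of analysis concrete, the sketch below shows how a multiple logistic regression of reintubation against candidate predictors might be fitted. It is an illustration only: the data are simulated and the variable names (age, full_support, reintubated) are assumptions made for the example, not the Krinsley dataset or model.

  # A sketch of a multiple logistic regression of reintubation against candidate
  # predictors, on simulated data. Variable names and effect sizes are invented
  # for illustration; this does not reproduce the Krinsley analysis.
  import numpy as np
  import pandas as pd
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  n = 100
  data = pd.DataFrame({
      "age": rng.normal(65, 12, n),           # age in years (hypothetical)
      "full_support": rng.integers(0, 2, n),  # on full ventilatory support (0/1)
  })
  # Simulated outcome in which older patients are more likely to need reintubation.
  logit_p = -6 + 0.08 * data["age"] + 0.5 * data["full_support"]
  data["reintubated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

  predictors = sm.add_constant(data[["age", "full_support"]])
  model = sm.Logit(data["reintubated"], predictors).fit(disp=False)
  print(model.summary())                      # coefficients; np.exp gives odds ratios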

Further statistical analysis showed that the differences in the mortality rate corresponded with the perceived need for reintubation. The patients who were not judged to need reintubation universally had “remarkably good outcomes”. The paper concludes with the comment that:

It remains incumbent on ITU teams to institute protocols for regular identification of patients ready to be liberated from mechanical ventilation.

The Epstein paper (Epstein et al 2000) is ostensibly on the same issue of unexpected extubation, but it appears to be much more clinically orientated than the other two. For this reason it certainly merits consideration. The study itself was rather smaller than the Krinsley study, with 75 study patients and a control cohort of 150. Significantly, in this case the controls were matched for “Acute Physiology and Chronic Health Evaluation II score, presence of comorbid conditions, age, indication for mechanical ventilation, and sex.” This makes for better statistical analysis of the overall results.

The authors here found similar results, in that the patients who required reintubation after UEX had significantly longer stays in the ITU and higher mortality rates.

A significant feature of this study is a particularly insightful résumé of the situation pertaining to unexpected extubation, in which the authors review the published literature (obviously this does not include the Krinsley figures). They point to the fact that unexpected extubation occurs in 3–16% of mechanically ventilated patients. (Zwillich et al 1999) The authors present the rather pragmatic view that:

Successfully managed unplanned extubation has the potential of improving outcome by shortening the duration of intubation, thereby reducing the patient's exposure to complications of mechanical ventilation.

They also state the obvious corollary to this argument: that a failure to tolerate unexpected extubation has the potential to reduce the chances of a good outcome by subjecting the patient “to the complications of a premature removal of (demonstrably) needed ventilatory support”. It should therefore not be a matter of surprise that some studies have shown an increase in mortality in the group of patients who effectively failed an episode of unexpected extubation when directly compared with those who did not need reintubation. (Atkins et al 1997)

As we have observed with other articles in this review, the authors comment that only limited conclusions can be drawn from the comparison between these two groups, simply because most patients who failed an unexpected extubation (and therefore needed reintubation) had been on full ventilatory support; this contrasts with the group who successfully tolerated unexpected extubation (no reintubation), who were generally patients in the process of weaning trials.

They follow this up with the eminently sensible suggestions that:

Therefore, the unique impact of unplanned extubation on outcome is better studied by comparison with controls not experiencing unplanned extubation. The majority of previous studies, including the only published case-control analysis, suggest no increase in mortality when comparing patients with and without unplanned extubation.

If we consider the figures obtained in the Boulain (1998) study and analyse them in this way, we find that there is actually no increase in mortality when these two groups are directly compared.

This study shows that the mortality that was reported in the other papers was actually almost independent of the fact of the unexpected extubation and directly statistically related to the severity of the underlying illness, the actual cause of the respiratory failure in the first place and also the presence of any other co-existing morbidity. (Torres, A et al 1995).

The authors also point to the fact that two other recent studies (Epstein et al 1998) and (Esteban et al 1999) have shown that patients who had been reintubated within 12 hours of a planned extubation had actually a lower mortality rate than those who were reintubated later than 12 hours.  

Taking an overview of this paper, the only practical positive outcome that it presents (apart from the demolition of findings in other papers) is that a successfully tolerated unexpected extubation reduced the duration of weaning trials but “had no other measurable beneficial effect on outcome”.

One of the reasons for presenting the data contained in these last few papers in comparative detail is that it clearly demonstrates the difficulties encountered in making a sufficiently critical appraisal of the data and of the statistical analysis of the figures presented. If one makes a sufficiently critical analysis of the data and study design, then some findings can be seen as virtually unchallengeable and others can be seen as seriously flawed. (Christie, J. M. et al 1996)

A number of papers that we have read in preparation for this review are centred on the development of a clinically applicable scale that can be used to accurately and reproducibly ascertain the level of sedation. This is clearly fundamental to the thrust of this review: correlating the amount of sedation given with the length of time spent on a ventilator is of little value on its own, as the effect of a given amount of sedation on a fit 30-stone man will clearly be considerably less than on a frail 8-stone elderly lady. Equally, there is a huge spectrum of sedatives, and indeed sedative cocktails, in common current usage.

This is an equally valid point of consideration as the type or mixture of sedatives is not nearly as important as the degree of sedation that the particular prescription actually produces. In this next section of the review we shall consider papers that are primarily concerned with this particular point – the ability to quantify the degree of sedation produced by any particular dosage regime in any given patient.(Sessler W 2004) 

Many of the papers that we have assessed thus far have used comparatively crude tools as a measure of the degree of sedation experienced by a patient. Some groups have been exploring more subtle and easily reproducible methods of determining the degree of sedation.

One eye-catching recent paper (Haenggi et al 2004) investigated the possibility of assessing the degree of sedation by producing evoked potentials with auditory stimuli. This particular study used 10 volunteers who agreed to be sedated to clinical levels with a number of different sedatives. The paper itself is both long and technical, but the results can be condensed into the statement:

Our results suggest that long latency auditory evoked potentials provide an objective electrophysiological analogue to the clinical assessment of sedation independent of the sedation regime used. 

The clinical implications of this study are considerable. The authors were able to demonstrate that acoustic stimuli are capable of producing distinct and discrete changes in the electroencephalogram (EEG). These changes can be used without demonstrable harmful effect to monitor the degree of sedation virtually continuously in clinical conditions and independently of the type of drug that is actually being used to produce the sedative level. (Roustan, J.-P et al 2005). 

The authors point out that considerably more work needs to be done in order to calibrate and fully implement such observations in a clinical setting, but an initial assessment would suggest that there may well be considerable clinical potential for this particular application.

Sadly, the prognostic predictions of the Haenggi paper are not confirmed by a paper from Rundshagen (et al 2002). This group also considered the possibility of using auditory evoked potentials to assess the depth of sedation. They specifically targeted the mid-latency auditory evoked potential (MLAEP) Na, Pa and Nb components and found them not to be of any clinical significance in assessing the degree of sedation. Obviously this does not negate the findings of the Haenggi group, but simply underlines the specificity of their findings.

It is important to consider this work in its chronological context. Thornton (et al 1998) also considered the evoked response in the Nb range and were able to conclude only that the abolition of the AER three-wave pattern was indicative of the attainment of the level of sedation at which auditory awareness is lost.

In another article, Pockett S (1999) reviews the situation further and concludes that, although auditory evoked potentials in the midrange may seem to have potential for assessing the level of sedation, work needs to be done in the high and low ranges of evoked potentials to fully ascertain the true potential of the method.

Another recently published paper (Roustan et al 2005) takes the points explored here and raises the bar further with the evaluation of a full EEG analysis to determine any correlation with the overall degree of sedation.

The authors point to the fact that there are no truly (or sensitively) reliable clinically based scales to assess the degree of sedation of a patient. (De Jong, M. M. J. et al 2005)

This particular trial is based on the fact that EEG (spectral and bispectral analysis) parameters have been calibrated and used in the past in the form of an index to monitor the depth of operative anaesthesia. They also point to the fact that there have been no published attempts to correlate these findings with a similar application relating to the degree of sedation in an ITU. The trial comprised over 160 EEG recordings from 40 patients. All of the patients were sedated with either midazolam alone or a combination of midazolam and morphine.  

The authors set out to correlate the possible measurable parameters of the EEG with two of the best established clinical sedation scales – the Ramsay and Comfort scales.

The authors attempted to determine which of the parameters were directly related to either too light a state of sedation (Ramsay 1 or 2) or too deep a state of sedation (Ramsay 5 or 6). The paper presents, in great detail, the analysis of the parameters and their correlation (or otherwise) with the level of sedation achieved. For our purposes here we will observe that two of the parameters (called ratio10 and SEF 95) were closely related to the level of sedation, and that the relationship was marginally better if the two results were added together.

The results were found to be highly indicative of the degree of sedation in any one individual patient, but there was a large degree of interindividual variability. The authors also found that bispectral evaluation improved the sensitivity over simple spectral analysis. It follows that the technique can be used to great effect once it has been calibrated to the individual patient, but it is unlikely that one all-encompassing index could be achieved that applies to all patients.
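
For readers unfamiliar with these parameters, SEF 95 (the 95% spectral edge frequency) is a standard quantitative EEG measure: the frequency below which 95% of the spectral power of the signal lies. The sketch below shows one common way such a parameter can be computed from a short EEG segment; it is an illustration of the general technique only, using a synthetic signal and an assumed sampling rate, and it does not reproduce the paper's ratio10 index or its bispectral analysis.

  # Illustrative computation of the 95% spectral edge frequency (SEF95) for a
  # single EEG segment using Welch's method. The signal and sampling rate are
  # synthetic; the paper's ratio10 index and bispectral analysis are not shown.
  import numpy as np
  from scipy.signal import welch

  fs = 256                                   # assumed sampling rate in Hz
  t = np.arange(0, 10, 1 / fs)               # a 10-second synthetic "EEG" segment
  eeg = (np.sin(2 * np.pi * 4 * t)           # 4 Hz component
         + 0.5 * np.sin(2 * np.pi * 10 * t)  # 10 Hz component
         + 0.2 * np.random.default_rng(1).normal(size=t.size))

  freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density
  band = (freqs >= 0.5) & (freqs <= 30)            # typical EEG band of interest
  cumulative = np.cumsum(psd[band]) / np.sum(psd[band])
  sef95 = freqs[band][np.searchsorted(cumulative, 0.95)]
  print(f"SEF95 is approximately {sef95:.1f} Hz")  # frequency below which 95% of power lies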

Another paper (Ely et al 2003) considered how to tackle the problem of assessing the depth of sedation in the ITU patient. At present there are a number of different methods (we have already described some) and a number of different scales that can be used to assess sedative levels. The practical difficulty is that the results of one scale are hard to interpret in relation to another. The Ely paper set about a direct comparison between several of the commonly used scales and a common measuring point – the Richmond Agitation–Sedation Scale (RASS). The methods and scales compared included:

RASS, Glasgow Coma Scale (GCS), and Ramsay Scale (RS); validity of the RASS correlated with reference standard ratings, assessments of content of consciousness, GCS scores, doses of sedatives and analgesics, and bispectral electroencephalography.

The study was designed as a prospective cohort study of over 300 patients. The design was quite ingenious as each patient was independently assessed by two nurses, each using one particular assessment tool. In some cases the nurses assessed the same patient using the same tool independently (but blinded to each other) in order to assess interrater reliability.
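
The paper reports its own reliability statistics, which are not reproduced here. Purely as an illustration of how agreement between two independent raters of an ordinal scale such as the RASS can be quantified, the sketch below computes Cohen's kappa on a set of invented ratings; the figures are hypothetical and the choice of kappa is ours, not necessarily the statistic used in the study.

  # Illustration only: quantifying agreement between two nurses' independent
  # RASS ratings of the same patients using Cohen's kappa. The ratings are
  # invented for the example and are not the Ely study data.
  from sklearn.metrics import cohen_kappa_score

  nurse_a = [-2, -1, 0, -3, 0, -1, -2, 0, -4, -1]   # hypothetical RASS scores
  nurse_b = [-2, -1, 0, -2, 0, -1, -2, 1, -4, -1]

  kappa = cohen_kappa_score(nurse_a, nurse_b)
  # A linearly weighted kappa penalises larger disagreements more heavily,
  # which suits an ordinal scale such as the RASS.
  weighted = cohen_kappa_score(nurse_a, nurse_b, weights="linear")
  print(f"Unweighted kappa: {kappa:.2f}; weighted kappa: {weighted:.2f}")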

Again, the statistical analysis presented in the paper is quite formidable and not appropriate to present here but the important overall conclusions were that the RASS gave the best degree of correlation with all of the available indices. The authors felt able to state, at the end of their paper:

The RASS demonstrated excellent interrater reliability and criterion, construct, and face validity. This is the first sedation scale to be validated for its ability to detect changes in sedation status over consecutive days of ICU care, against constructs of level of consciousness and delirium, and correlated with the administered dose of sedative and analgesic medications.

The argument is examined further, in a slightly different direction, in the De Wit (et al 2003) paper. These authors performed a similar exercise to the Ely team (above), insofar as they compared the assessments of sedation level in patients who were receiving sedatives both by continuous infusion and by other (intermittent) means. They looked at the Sedation–Agitation Scale and the bispectral index as evaluations of the sedative level. The cohort was only 19 patients, on whom 80 assessments were analysed. As a general finding, the authors point out that those patients receiving continuous sedation were more deeply sedated than those who received bolus sedation. This is actually the only really practically useful finding in the paper, as the general conclusions that the authors reach are:

Objective and subjective assessments of sedation are highly correlated. Use of continuous infusions is associated with deeper levels of sedation, and patients receiving continuous infusions are more likely to be oversedated. Sedation therapy should be guided by subjective or objective assessment.

This, in reality, does little more than confirm the results of many other published works on the subject. A more recent paper by Watson & Kane-Gill (2004) covers effectively the same ground and does not provide any further help in assessing the validity of the various scales.

If we consider the relevance of the agents used, there are a number of excellent papers on the subject.

There are clearly a number of sedatives that are commonly used in the ITU setting for the sedation of the ventilated patient. They all have the same macro-effect, but all differ subtly in their mode of operation and their side-effect profiles. Their use is determined largely by the personal preference and experience of the lead clinician, together with any protocols that may be in force in any one particular ITU.

In our consideration of the length of time patients spend on mechanical ventilation, we should also properly consider the effect that the choice of the sedative agent has on the overall clinical outcome.

To that end we can start with a very recent paper by Arroliga (et al 2005) which considered the effects of sedatives and neuro-muscular blockers (NMBs) on patients who were on mechanical ventilators.

In effect, this study was a meta-analysis of the use of these agents as the entry cohort was enormous (over 5,000 patients).

The first (and probably self-evident) result that the study draws out is that those patients who were sedated for their mechanical ventilation spent a statistically significantly longer time both on ventilation and in weaning than those patients who did not receive sedation. They also spent longer in the ITU than those who did not.

Care of the Dying Patient – Dissertation Sample

One of the most difficult decisions for a patient to have to make, or to have made for them, is whether attempts should be made to prolong life once death has become imminent and inevitable. There are many situations in a medical setting where the patient may decide that it is better not to stop or hold up the inevitable process of dying, as to do so would only cause further suffering with no benefit.

Many factors, however, are important in this decision, primarily the interests of the patient themselves, but also input and actions of the medical team, including the nursing staff. There are a number of difficult situations which raise important legal issues about the rights and duties of the patient and the nurse when the patient's death is imminent or unavoidable.

While euthanasia is still very contentious it has perhaps become more accepted over the last decade with the debate carried out in ever more public forums. Another important factor is the improvement in the medical technologies and the concomitant abilities of medical staff to keep those patients who, in the past would have died, alive for longer. Research carried out by Asch (1996) surveyed 1,600 critical care nurses in the US to ask them if they had received requests to assist in euthanasia or suicide from either doctors, patients, family members or those acting for family members. 17% reported that they had received these requests, while 16% indicated that they had been involved in an assisted suicide or euthanasia.

Further, 4% had hastened a person's death by not actually providing the treatment that was ordered by a doctor. From these kinds of figures it can be seen that this sort of situation is an important factor in the professional life of a nurse. While it is often thought that it is the doctor who makes the decision about the ending of life, in practice nurses have a large role to play (Moody, 2003). Nurses will often be involved in receiving the request for euthanasia, be actively involved in the decision-making process, carry out the doctor's orders, and provide support for the family after the patient has died (Beer, Gastmans & Dierckx de Casterlé, 2004). That is why it is important that there is an understanding of the legal issues involved. In this discussion the legal issues will be briefly reviewed, and some of the relevant cases and ethical issues examined, to see how these affect the professional practice of a nurse.

The starting point for this type of discussion is always that it is illegal to cause a person's death, with or without their consent. This principle applies whether a lethal injection is being given or whether the patient is being assisted in a suicide attempt – the Suicide Act 1961 covers this situation. While this is a clear legal principle, its practical application is somewhat muddied by the fact that an omission to act, in other words a failure to provide treatment, can also result in the patient's death. Does this, then, constitute an illegal omission?

The case of Airedale NHS Trust v Bland (1993) considered this question. In this case the patient had been involved in the Hillsborough disaster and had received enormous damage to the brain as a result of being crushed. The damage had left the patient in a persistent vegetative state that, all of the medical opinion available agreed, was not open to any kind of treatment that would improve the patient's condition in any way – and indeed had not done so for three years. The patient's father was of the opinion that his son would not want to be left that way and that it was the right thing to do to remove the life-sustaining equipment.

The physicians and the hospital agreed with the father and sought permission from the court that ventilation, nutrition and hydration could be removed so that the patient could be allowed to die peacefully. The court decided that the main aim of medical care was to benefit the patient and that a large body of medical evidence existed that showed that a patient continuing in a persistent vegetative state was not in their interests or to their benefit. While there had been a duty to provide invasive care and treatment for the patient, the omission of this duty was found to be no longer unlawful. For this reason the court allowed the medical staff to remove the provision of ventilation, nutrition and hydration. The court also decided it is necessary in future cases for an application to be made to the court before life-sustaining intervention is removed.

While this case set an important precedent for how medical staff can deal with patients with an extremely poor prognosis, matters are often less clear-cut. Moody (2003) draws an important distinction between passive and active euthanasia, the former meaning that nothing is done to preserve life and the latter that an action is taken to end life. Moody (2003) contrasts these two types of euthanasia in describing two important recent legal precedents. In the case of Ms B v An NHS Trust Hospital (2002), Ms B had become paralysed from the neck down and had requested that her breathing support be removed as she did not want to carry on living.

Legally, a patient can refuse to have treatment, but only if they are deemed to have sufficient mental capacity to make that decision. This case, therefore, centred around whether the patient had the mental capacity to consent to make a decision about her own life. The test for this that was established in Re C (Adult: Refusal of Medical Treatment) (1994) was that the patient must be able to understand information that is relevant to the treatment, must be able to believe that information and, in addition, must be able to balance the sides of the argument when coming to a decision. This test was further clarified by Re MB (1997) which found that a patient is unlikely to have the required mental capacity if they are unable to retain or comprehend information that is important to the decision and they are unable to weigh it in the balance. It was decided in this case that Ms B did have the mental capacity to understand the decision and the court allowed her request. 

In the second case, R (on the application of) Pretty v DPP (2002), Diane Pretty was suffering from motor neurone disease. This had left her unable to move any muscles in her body voluntarily from the neck down and unable to talk or eat, but her mental capacity was unimpaired. She arranged with her husband that he would assist her in ending her life and, since this is a criminal offence, she sought an assurance that he would not be prosecuted after her death. The decision of the House of Lords was that the Human Rights Act 1998 did not protect the right of self-determination. Diane Pretty appealed to the European Court of Human Rights, which agreed with the House of Lords. Where this case differs from that of Ms B is that it required an active intervention to bring about Diane Pretty's death – something still considered illegal under UK law.

A further important distinction to be made in euthanasia is whether it is voluntary or involuntary. In the case of R v Arthur (1981), John Pearson was born with Down's Syndrome and, after his parents decided they did not want to keep him, the doctor prescribed a sedative and 'nursing care only'. The child died shortly afterwards as a result of not having been fed. The doctor was prosecuted for attempted murder but the court found him not guilty. This seems surprising, as the doctor's decision not to feed the baby clearly caused it to die. In legal terms, a nurse carrying out the orders of a doctor in analogous circumstances would probably be legally negligent and perhaps even at risk of being prosecuted for homicide.

R v Arthur (1981) raises some important issues about the difference between voluntary and involuntary euthanasia. As Farsides (1992) points out, there is often relatively little difference between an act and an omission; in many circumstances an omission can be considered an act in any case. Moody (2003) elaborates on this, saying that the emphasis tends to be placed on 'allowing someone to die' because there is generally a feeling that this is morally acceptable, whereas an act of causing someone to die is seen as reprehensible. As can be seen, the boundaries between an act and an omission rapidly break down, but, as discussed earlier in the case of Airedale NHS Trust v Bland (1993), the legal position tends to make a strong distinction between acts and omissions, and so this has a strong effect on the way euthanasia can take place in an 'acceptable' fashion. Still, in both the Bland case and in R v Arthur (1981) there is a legal reluctance to label the medical decisions made in these kinds of cases as criminal.

While some philosophers argue about whether there is really a difference between active and passive euthanasia and where the dividing line runs, the more important issues are practical ones. As is frequently the case, a dying patient will be in considerable pain, and in order to alleviate this pain morphine is often prescribed in large doses. As well as dampening the pain, morphine will also depress respiration and so a life will often be shortened by its use. This is not, however, seen as a case of voluntary manslaughter as the main intention of administering the morphine is not to shorten life, but to alleviate suffering.

This has also been backed up by the case law. Quill (1997) explains that this is often called the doctrine of double effect. By focussing on the intention of the doctor or nurse in the particular situation, it aims to differentiate between actions that are allowed and those that are not. The main problem with this doctrine is, as Quill (1997) points out, that it is very difficult to determine the intention of the healthcare professional – have they foreseen an outcome, or did they intend a particular outcome? The argument ends up as one purely of semantics. 

There are a number of ethical issues underlying the difficult area of euthanasia that try to lay down principles that can guide treatment. One of the most basic, which has previously been discussed in a legal context, is the idea of withholding or withdrawing treatment. Bound up tightly in this is the concept of medical futility. This refers to the idea that, because of advances in technology, there is a tendency to use the technology simply because it is available, rather than because it is for the good of the patient. A very common example is in the resuscitation of patients – CPR. It is now normal for a patient or their family to be able to decide that they do not want resuscitation to be attempted in the event of cardiac arrest: a DNR order.

The General Medical Council, along with the Royal College of Nursing, have produced joint guidelines on CPR. These emphasise the importance of taking into account the legal issues inherent in the Human Rights Act 1998, including the right to life (Article 2) and the right to be free from inhuman or degrading treatment (Article 3). The guidance also underlines that neither a patient nor their family can insist on treatment that is deemed inappropriate. Moreover, a patient has the right to refuse CPR, and CPR may not be administered if the patient is in the terminal phase of illness. At all times, though, the decision must be discussed with the patient and their family unless they specifically request otherwise – the nurse, in the role of patient-advocate, may take on part of this responsibility.

Another oft-mentioned distinction is the difference between ordinary and extraordinary forms of treatment. Edge & Groves (1999) explain that ordinary treatments are those that can be used without excessive pain, expense or other inconvenience and that have a reasonable hope of benefit. Extraordinary means are those that cannot be obtained on those terms and that also do not offer reasonable hope of benefit. Critics sometimes argue that this formulation does not take into account the changing nature of medical technology, as what was once an extraordinary treatment can soon become an ordinary one.

Writers on bioethics, according to Dickenson (2000), have tended to find major problems with the ethical doctrines just described and have mostly abandoned them. By the same token writers on legal issues argue that many of the distinctions are simply unsustainable and a more honest and realistic approach to the end of life is required (Otlowski, 1997). But for a nurse practitioner it is important to know the attitudes of current professionals as this will form the climate in their workplace. It seems from the work done by Dickenson (2000) in surveying doctors and nurses in the US and UK on these issues, that these kind of doctrines are still important in decision-making.

For example, in the survey, 69% of nurses in the UK agreed that the distinction between extraordinary and ordinary treatments was useful. The survey also asked what caused the most disagreement among staff. Nurses in the UK reported that it was decisions over the medical futility of treatment that caused the most friction, with 60% agreeing. The doctrine that nurses in the UK most strongly believed in was the doctrine of double effect, with only 3% disagreeing with it. It appears from this kind of survey information that, while there are some inconsistencies inherent in the use of some of these ethical ideas, they nevertheless have some utility in a nurse's professional practice.

Looking to the future, calls have come from many quarters for the legalisation of euthanasia; according to Banaszak (1999), between 79% and 88% of the UK population are in favour of it. In two European countries, Belgium and the Netherlands, this step was taken in 2002, as Bilsen, Vander Stichele, Mortier & Deliens (2004) report. This is in contrast to the position in the UK – while euthanasia may be the practical effect of some treatments ordered by doctors, it is still illegal actively to kill another person in the UK, despite, even, the wishes of the patient concerned.

As can be seen from this review of the legal and ethical aspects of euthanasia, the area is very complicated. There are a number of points that nurses must be aware of and follow in their professional practice when dealing with dying patients. From a legal perspective, the only way that a patient can die is if things are seen to be taking their 'natural course'. In other words, nurses and healthcare professionals remain relatively passive and death is allowed to occur without 'active' intervention. As has been seen, this idea is still fraught with difficulties, since healthcare professionals rarely remain completely passive and even doing nothing is, in effect, an act. Despite this, there are some established doctrines that are of practical use in dealing with patients at the end of life and that are described as having some utility by current healthcare professionals – these provide the best guiding principles available at present when working within the legal framework.

References

  • Airedale NHS Trust v Bland (1993) 1 All ER 821
  • Asch, D. A. (1996) The role of critical care nurses in euthanasia and assisted suicide. New England Journal of Medicine, 334(21), 1374-1379.
  • Banaszak, A. (1999) The prospects for nursing if euthanasia is legalised. Nursing Standard, 13(45), 38-40.
  • Beer, T., Gastmans, C., Dierckx de Casterlé, B. (2004) Involvement of nurses in euthanasia: a review of the literature. Journal of Medical Ethics, 30, 494-498.
  • Bilsen, J. J., Vander Stichele, R. H., Mortier, F., Deliens, L. (2004) Involvement of nurses in physician-assisted dying. Journal of Advanced Nursing, 47(6), 583-591.
  • Dickenson, D. L. (2000) Are medical ethicists out of touch? Practitioner attitudes in the US and UK towards decisions at the end of life. Journal of Medical Ethics, 26(4), 254-260.
  • Edge, R. S., Groves, R. J. (1999) The Ethics of Health Care: A Guide for Clinical Practice. 2nd edn. Albany: Delmar Publishers.
  • Farsides, C. (1992) Active and passive euthanasia: is there a distinction? Care of the Critically Ill, 8(3), 126-128.
  • Moody, J. (2003) Euthanasia: a need for reform. Nursing Standard, 17(25), 40-44.
  • Otlowski, M. (1997) Voluntary Euthanasia and the Common Law. Oxford: Oxford University Press.
  • R v Arthur (1981) 12 BMLR 1
  • Re C (Adult: Refusal of Medical Treatment) (1994) 1 All ER 819
  • Re MB (1997) 2 FLR 426
  • R (on the application of Pretty) v DPP (2002) 1 All ER 1
  • Quill, T. (1997) The rule of double effect: a critique of its role in end-of-life decision making. New England Journal of Medicine, 337(24), 1768-1771.

Tuberculosis – Dissertation Sample

Can directly observed therapy prevent the re-emergence of tuberculosis in northern England?

The Department of Health reports that tuberculosis cases have increased by 25% in the last 10 years with 6500 cases reported each year.

Tuberculosis, or TB, is an infectious disease caused by the bacterium Mycobacterium tuberculosis, better known as the tubercle bacillus. TB is typically a disease of the lungs but can affect other parts of the body.

The disease spreads from an infected person through coughing, sneezing and prolonged close contact with another person. The disease, however, is not highly contagious, and it may take some years for an infected person to develop active disease. TB can be cured if special antibiotics are taken for a course of six months. The most effective method of controlling the spread of the disease is to identify the people who already have TB, provide them with proper treatment to cure the infection, and so prevent the disease from spreading any further (DH, 2005).

Although under some medical control, TB is still a significant clinical problem. Cases of tuberculosis in England were high in the 1960s and 1970s, declined progressively until the late 1980s, and have risen again since the 1990s.

In this dissertation we discuss the problems associated with the disease, its signs, symptoms and cases, highlighting particular case studies of tuberculosis within England. We focus our discussion on tuberculosis as seen in Northern England and analyse its incidence rate. If there is a re-emergence of tuberculosis, as is apparent from health reports and clinical studies, we seek to analyse why this has been the case and the relevance of a rise in the number of cases of tuberculosis in Northern England. In our analysis we present several studies on tuberculosis and its control methods, together with the DH initiatives, action plan and measures to tackle the problem.

We also discuss certain case studies in depth to evaluate whether directly observed therapy can be used more effectively than conventional, self-administered therapy for the prevention and treatment of tuberculosis.

Chapter 2 – Background and Literature Search:

According to the DH fact file on the statistics and prevalence of tuberculosis in England (DH, 2004):

• Tuberculosis (TB) is a serious, but treatable, infectious disease

• TB in England increased by 25 per cent over the last ten years and is still rising; over 1700 more cases occur each year than in 1987, when TB was at its lowest recorded incidence. In the past decade or so there has thus been a drastic re-emergence of TB in England.

• 6638 people were newly diagnosed with TB in England in 2002. That is 13 for every 100,000 people in the population (a simple arithmetic check of this rate is given after this list) – fewer than in some countries, but more than in several other western European countries, where TB rates in 2001 ranged from 5 to 44 per 100,000 population

• About as many people in England develop TB each year as become newly infected with HIV. A relation between TB and HIV can be drawn, since HIV infection leads to increased vulnerability to TB.

• Every year around 350 people in England die from TB

• Most TB in England occurs among people who live in inner cities. Two out of every five cases are in London. The disease has doubled in London in the last ten years and a few London boroughs now have TB rates comparable with some developing countries

• People are at higher risk of TB if they have lived in parts of the world where TB is more common. The disease follows patterns of migration and is therefore more common in certain ethnic groups, especially if they were born abroad:

– in England, around seven out of every ten people with TB come from an ethnic minority population group

– nearly two thirds of our TB patients were born abroad

– about half of the TB patients who were born abroad are diagnosed with the disease within five years of first entering the UK

• HIV infection weakens a person’s immunity to TB. In England, this overlap is still relatively small compared to other parts of the world, but at least three per cent of people with TB are estimated to be HIV positive (this rate is even higher in London)

• TB in cattle – bovine TB – is increasing in England. Very few human cases are due to this bovine form, but continued vigilance is required to prevent transmission from cattle to humans

• TB can be controlled by:

– promptly recognising and treating people with the disease

– ensuring that people with the disease complete their treatment. Lapses in treatment not only fail to cure the disease but also contribute to the growth of drug resistance and the spread of the disease
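
As a rough arithmetic check of the rate of 13 per 100,000 quoted above, the figure can be reproduced from the 2002 case count, assuming England's mid-2002 population to have been roughly 49-50 million (a figure not stated in the fact file itself):

\[
\text{incidence rate} = \frac{\text{new cases}}{\text{population}} \times 100{,}000 \approx \frac{6638}{49{,}600{,}000} \times 100{,}000 \approx 13.4 \text{ per } 100{,}000
\]

which rounds to the reported figure of about 13 per 100,000 per year.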

The risk of TB from contact with an infected person also depends on the duration of exposure (table not reproduced here; source: New England Journal of Medicine 2003; 348: 1256-66; DH, 2004).

According to Tandon et al (2002), tuberculosis is a major public health problem in any developing country and is made worse by poor adherence to treatment schedules and frequent interruptions to treatment, such as failing to take the proper medication. Tandon et al emphasise that treating tuberculosis requires following a strict schedule and maintaining clinical treatment discipline in order to eradicate both active and dormant mycobacteria and cure the disease completely.

In analysing the manifestation and occurrence of TB, we discuss all these issues in greater detail and cite evidential studies to support our point and the summary provided by the Department of Health. TB has been found to be more common among ethnic minority groups and within London, and lapses in the treatment or diagnosis of TB lead to drug resistance that can impede treatment. The causes and factors that lead to TB, and the barriers identified in its treatment, are discussed along with a critical examination of the effectiveness of directly observed therapy (DOT) in tuberculosis treatment. We examine this in the context of the re-emergence of TB in recent years and how it relates to particular contexts such as Northern England.

For our purposes we conducted a literature search on the causes, factors and manifestation of TB, the relationship of TB to HIV and multi-drug resistance, how these affect treatment, how TB can be treated effectively and what actions should be taken to control the spread of the infection. The relevance of directly observed therapy as against conventional self-administered therapy is discussed in terms of cost-effectiveness and duration of treatment under the different approaches. We searched Medline, ScienceDirect and other medical and nursing journal databases using search terms such as 'directly observed therapy', 'tuberculosis' and 'tuberculosis England'. We provide an analysis of our findings below and consider the relevance of the Department of Health action plan in the context of these evidential clinical studies.
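
Purely as an illustration of how the search terms listed above might be combined into a single Boolean query, the short sketch below builds such a query string; the grouping of terms, the synonyms added and the exact syntax accepted by Medline or ScienceDirect are assumptions rather than a record of the search actually performed.

# A minimal sketch: join synonyms with OR within each concept group,
# then join the concept groups with AND to form one Boolean query string.
term_groups = [
    ["directly observed therapy", "DOT"],   # intervention (synonym assumed)
    ["tuberculosis", "TB"],                 # disease (synonym assumed)
    ["England", "northern England"],        # setting (terms from the search above)
]


def build_query(groups):
    """Return a Boolean query of the form (a OR b) AND (c OR d) AND ..."""
    clauses = ["(" + " OR ".join('"%s"' % term for term in group) + ")" for group in groups]
    return " AND ".join(clauses)


if __name__ == "__main__":
    print(build_query(term_groups))
    # ("directly observed therapy" OR "DOT") AND ("tuberculosis" OR "TB")
    # AND ("England" OR "northern England")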

Chapter 3 – Tuberculosis – Evidential Studies:

In this section we take our discussion a step further by identifying the causes of tuberculosis and how it manifests itself. The implications of HIV infection and multi-drug resistance in tuberculosis are discussed, along with the role of vitamin D in the onset of the disease. The differences between adult and childhood tuberculosis and the importance of controlling the spread of the disease are also discussed, together with the factors identified by the Department of Health as contributing to transmission of the infection even within healthcare facilities.

The Department of Health has clearly differentiated between HIV (human immunodeficiency virus) related tuberculosis and drug-resistant tuberculosis. Although these are separate factors affecting infected persons, special care should be taken to prevent contact between persons with HIV and persons suffering from tuberculosis. HIV-infected individuals are more vulnerable to such infectious diseases, and transmission of drug-resistant disease to HIV-positive patients can be common. In fact, in many countries people affected by HIV have been found to have tuberculosis as well; tuberculosis is the most common co-infection with HIV (Department of Health, 1998) and develops more rapidly in HIV-infected patients. In 1991 Horner and Moss reported that persons with AIDS (PWAs) are 100 times more likely to develop tuberculosis than the general population. TB incidence rates in the US are high among drug users, ranging from 4-21%, and in London 25% of AIDS patients have been found to have tuberculosis as well.

In AIDS patients in particular there is reactivation of latent tuberculosis infection due to failure of the immune system, and TB develops through reactivation or exogenous primary infection. Risks are high for HIV sero-positive patients, and TB can manifest itself in the early stages of HIV infection; symptoms of TB in HIV patients include fever, weight loss, malaise, cough accompanied by laboured or difficult breathing, an atypical chest radiograph, and extra-pulmonary TB. Delays in diagnosis and treatment are common, and many sputum samples may not immediately test positive for Mycobacterium tuberculosis, so treatment should begin immediately.

Some studies have demonstrated that isoniazid prophylaxis substantially decreases the incidence of TB in HIV sero-positive patients in Zambia. Horner and Moss (as cited by DH, 2004) report that there is no conclusive evidence of either the harm or the effectiveness of the BCG vaccine in HIV-infected children and adults, although BCG has been widely regarded as capable of preventing tuberculosis.

HIV infection by itself does not cause tuberculosis, but it makes a person vulnerable and increases the risk of acquiring tuberculosis almost 100-fold if the person is exposed to tuberculosis bacteria. Thus, compared with an immuno-competent person, a person with immunodeficiency, lacking resistance and immunising capability, falls prey to tuberculosis easily. The same applies to drug-resistant tuberculosis, which leaves affected individuals more vulnerable than persons with drug-sensitive tuberculosis.

Drug-resistant tuberculosis is also common in many developed countries: once chemotherapy was introduced, it was discovered that resistant strains of the tuberculosis bacterium emerged rapidly under single-drug treatment, so a combination of several drugs had to be used instead. Drug resistance in tuberculosis is the result of poor treatment and inadequate control measures. The DH states that in 1996, within England and Wales, 6.1% of initial isolates of M. tuberculosis were resistant to the drug isoniazid and 1.8% were resistant to rifampicin; 1.6% were multiple drug-resistant. According to the Department of Health, this represents a small but significant increase in drug-resistant tuberculosis since 1993. Drug-resistant disease is more difficult to treat and poses greater challenges and threats than drug-sensitive disease. Prevention of the emergence of drug-resistant strains of tuberculosis is one of the stated aims of the Department of Health national tuberculosis policy.

Conaty et al (2004) distinguished between primary and secondary drug resistance. They defined primary drug resistance as that which is transmitted, and secondary drug resistance as that which develops during the course of treatment, and evaluated the risk factors for each type. Patients in England and Wales with isoniazid-resistant and multidrug-resistant tuberculosis, studied between 1993-1994 and 1998-2000, were compared with patients with fully sensitive tuberculosis, and were examined separately according to whether they had had previous episodes of the disease.

The study indicated that, in patients with previous tuberculosis, smear positivity and arrival in the UK within the previous five years were strongly associated with multidrug resistance and isoniazid resistance. In patients with no previous tuberculosis, an existing HIV infection and foreign birth were found to be risk factors for multidrug resistance, and for people of non-white ethnicity HIV infection was an important factor in isoniazid resistance. The risk factors for each type of resistance therefore seem to differ, with elevated risks found for residence in London, HIV positivity and ethnicity where there was no record of previous tuberculosis; the presence of previous tuberculosis and of HIV infection increases the prevalence of multidrug-resistant tuberculosis in certain ethnic groups.

In one clear evidential study on tuberculosis, Jenkins (2005) examined rifampicin resistance in a tuberculosis outbreak in a London hospital. In this study, Mycobacterium tuberculosis isolates were cultured from six patients who were associated with an isoniazid-resistant M. tuberculosis outbreak and showed symptoms of the disease. This strain of mycobacterium was also found to have acquired rifampicin resistance. Sequencing of the rpoB gene revealed that this resistance could be traced to rare mutations in each of the isolates; three isolates were found to have a mutation outside the rifampicin resistance-determining region (Jenkins, 2005).

This brings us to a more detailed analysis of Mycobacterium isolates, their origins and properties. Dale et al (2005) used isolates of Mycobacterium tuberculosis from a population-based study in London, and these isolates were assigned to 12 groups, or superfamilies ('sfams'). Analysis of patient data suggested clear geographical associations in the distribution of these sfams in the population. For example, isolates obtained from European-born patients belonged to different sfams from those of patients born elsewhere, suggesting that transmission of tuberculosis from immigrant communities into the indigenous population is rare. Multivariate analysis, however, showed that some sfams were found independently of the country of birth or ethnicity of individuals and were significantly associated with pulmonary rather than extrapulmonary disease and with sputum smear negativity. This suggests that the properties of the infecting organism play a role in the nature and manifestation of the disease process.

Vitamin D deficiency has also been associated with tuberculosis. Ustianowski et al (2005) analysed the prevalence of vitamin D deficiency and its associations in TB patients, a deficiency that is common in foreign-born persons resident in developed countries. The study is a helpful guide to the associations and incidence of vitamin D deficiency in TB patients, based on an infectious diseases unit in an English hospital.

Vitamin D is important to the host as a defence against TB, and any deficiency of the vitamin can become an acquired risk factor for the disease. For the purposes of the study, 210 patients diagnosed with TB had their plasma vitamin D levels measured routinely. The prevalence of vitamin D deficiency and its relationship to ethnic origin, religion, site of TB, sex, age, duration of stay in the UK, month of estimation and TB diagnosis were determined. Among the patients, 76% were deficient, and many had undetectable vitamin D levels.

Asians were found to have low levels of the vitamin, and although there was a significant association between vitamin deficiency and ethnicity or country of birth, no association was found with the site of TB or the duration of residence in the UK. The authors concluded that vitamin D deficiency is commonly associated with TB among all ethnic groups apart from white Europeans and South East Asians. Lack of exposure to sunlight and an exclusively vegetarian diet are factors that can lead to this deficiency. The factors identified by the Department of Health as having contributed to the transmission of the infection in HIV settings or healthcare facilities are:

• delay in considering the diagnosis of tuberculosis;

• delay in confirming the diagnosis;

• delay in considering and establishing drug-resistance;

• delay in starting treatment;

• treatment with inappropriate drugs (and dosages);

• default from treatment;

• lapses in isolation (eg inappropriate accommodation taking into account the infectiousness or likely infectiousness of the case, the immune status of the surrounding patients/contacts, and any suspected or confirmed drug resistance; the patient wandering from an isolation room into other patient areas; inadequate or incorrect ventilation of isolation rooms);

• performance of aerosol-generating procedures on a patient with (sometimes unsuspected) pulmonary tuberculosis in an open ward containing immunocompromised patients. (DH, 1998)

The Department of Health has also identified that lapses on the part of medical professionals and human fallibility can lead to the rapid spread of tuberculosis. The primary elements in the control of tuberculosis are:

1. Prompt recognition, confirmation and treatment of cases

2. Using certain infection control measures to reduce airborne spread of infection from infectious patients to others.

3. A team approach for effective control and decision-making

4. Establishing close working relationships, between health care workers, and between all involved in the care of an individual patient, in particular between the TB physician, HIV physician, microbiologist, hospital infection control doctor and team, TB nurse specialist and the consultant in communicable disease control who has overall responsibility for tuberculosis control.

Childhood tuberculosis deserves specific study, as 40% of all cases of tuberculosis are reported in children. The control of TB is an important item on the health agenda and an issue of global importance, although no complete control of the disease can be promised or expected at present. Adult tuberculosis is thought to be related to childhood tuberculosis, and it is recognised that infection acquired during childhood promotes reactivation of disease in adulthood, maintaining the chain of transmission. This suggests that childhood tuberculosis needs equal or greater attention if control is to be effective. Treatment procedures include early diagnosis and ensuring treatment compliance. Sites that are inaccessible for bacteriological confirmation and the small number of bacilli make the diagnosis of childhood tuberculosis difficult, so circumstantial evidence is used as the basis for detection (Amdekar, 2005).

Clinical manifestations of tuberculosis in childhood depend on the immune response of the host and the virulence of the tubercle bacilli, and no typical manifestations or clinical presentations can be delineated. Many children therefore remain undiagnosed and consequently untreated. The conventional tuberculin test, radiology and other modern tests have limitations and may not be fully dependable. Failure of a tuberculosis control programme is invariably related to drug resistance resulting from poor patient compliance with treatment and poor recovery. Directly observed treatment, short-course (DOTS) has therefore been recommended unanimously for the treatment of tuberculosis (Amdekar, 2005). However, DOTS is used in less than 40% of tuberculosis cases, and misconceptions about TB control and treatment threaten to undermine the success of TB control programmes, which are essentially a clinical management problem. Amdekar (2005) suggests that greater accountability of governments, donors and providers is essential.

The Experimental Musician – Dissertation Sample

Norman Lebrecht (1992: introduction) cites the beginning of the twentieth century as the watershed in the history of musical evolution in the West. 

“The history of music both ended and began in a Milan hotel room at 3 a.m. on 27 January 1901 when Giuseppe Verdi drew his final breath. Verdi was the last of the titans whom music had brought forth with miraculous fertility ever since the near simultaneous birth in 1685 of Johann Sebastian Bach and Georg Frideric Handel. When one creative force expired another was ready and waiting to pick up the baton of progress.”

Lebrecht’s hypothesis is well founded. Although none of the celebrated European composers and musicians working during the period from the beginning of the Reformation to the end of the Belle Époque worked to a set, rigid template of creation, a discernible shift in focus is detectable after 1900 that lends the term ‘experimental musician’ a certain relevance and piquancy. Gradually, music shifted away from a narrative epicentre towards a rich and diverse musical landscape that necessitated a move away from performance art to an embrace of the sonic possibilities of studio recording. Music, furthermore, became intertwined with social movements as the twentieth century progressed, with the young, enfranchised members of society influencing the course of music as much as the traditional ruling elite who had largely controlled composition throughout the nineteenth century. In this sense, musicians working during certain defining moments of the twentieth century were bound to be classed as experimental, as technology and society offered options of creation and composition that had not hitherto existed. 

The essay will look at a range of musicians who each played their part in the reconstruction of compositional music during the twentieth century – from Erik Satie to Brian Eno, from Terry Riley to Karlheinz Stockhausen – encompassing avant-garde, electronic, minimalist and classical music. The discussion will attempt to follow a format that will, in turn, examine the creation of experimental music (analysing the techniques and innovations used to create the optimum studio and live performance by individual artists and groups), the exploration and formulation of those techniques in form, and the appraisals and critiques of the artists cited concerning their own work and their vision of how music might evolve. A conclusion will be sought so as to dovetail the above categories into a synthesis of opinion that attempts to show the essential continuities and differences between each of the experimental musicians within the study, with the overall aim that the term ‘experimental’ might be better defined as a result.   

Attempting to quantify the reasons why a composer composes is a nigh-on impossible task, tantamount to finding a formula for the motivation behind Picasso’s art. To state that every musician wishes to create a piece of aesthetically beautiful music is pedantic; and to attempt to find a blanket answer as to why all musicians compose is likewise futile. The common thread between each of the artists featured within this study, however, is the inherent inquisitiveness of each musician that leads him to wish to reconfigure the contemporary boundaries of popular music. None of these musicians, such as John Cage or Karlheinz Stockhausen, was satisfied with merely reproducing the work of either their influences or their peers; each felt that there was more ground to explore within their own space and time. This fundamental common trait acted as the trigger behind all of the experimental musicians of the twentieth century.

As in every other profession, musicians are a varied breed. Some, such as Alvin Lucier and Terry Riley, appear to be born ‘gifted’ and feel the music as an instinctive impulse, while others are forced to labour over their constructions, taking their starting point and moulding it, over time, into an experimental piece of music. Steve Reich (1974:11) appears to be just such a composer; in the following extract he describes a gradual process of creation as the antithesis of the drug-induced rock music that was prevalent at the time of his Writings about Music.  

“The distinctive thing about musical processes is that they determine all the note to note details and the over all form simultaneously. One can’t improvise in a musical process – the concepts are mutually exclusive. While performing and listening to gradual musical processes one can participate in a particular liberating and interpersonal kind of ritual. Focusing in on the musical process makes possible that shift of attention away from he and she and you and me outwards towards it.”

It is clear from this quotation that Reich sees music as an entity separate from man, humanity and therefore the instigator. Clearly, however, his personal philosophy and ideology influence how he perceives music. Satie, on the other hand, was deeply influenced by the people he met, constituting life influencing art rather than the other way around. Notoriously secluded and introverted (indeed, even celibate), Satie’s personal life was arid, yet his acquaintances and surroundings (Paris) were inherently experimental. Satie was especially influenced by the ‘outsider’ artists and contemporaries that he met, such as the Surrealists in the early 1920s, and artistic friends like Debussy and Cocteau. Kenneth Thompson (1973:456) underlines the power of these external influences upon Satie’s musical career, in particular the piece which subsequently singled him out as an experimental musician of the highest calibre and posthumously made him a hero of experimental music.

“It was Cocteau who arranged that Satie should collaborate with himself, Massine and Picasso on the Diaghilev ballet, Parade, the stormy premier of which in 1917 turned the composer into a father figure of the New Music in Paris, especially of those musicians (Milhaud and Poulenc among them) who later came to be known as Les Six.”

Likewise Karlheinz Stockhausen, who produced the first published score of electronic music with his Electronic Studies (1953-54), can be seen as a product of the geopolitical environment in which his creative impulse operated. West Germany after 1945 was a particularly innovative artistic environment (as was the Weimar Republic before it) in which to ply an experimental trade. The discernibly schizophrenic character of post-1945 West Germany – split asunder and divided by ideology, bricks and mortar – can be seen to have influenced Stockhausen, who was particularly interested in creating a ‘unity of opposites’ in his compositions, a radical concept that is explained by Joseph Machlis (1980:488).

“He [Stockhausen] expanded the concept of the series to include not only pitch but also rhythm, timbre, dynamics and density, in this way achieving total serialisation. At the same time he was fascinated by the possibility of combining total control with total freedom, thereby reconciling two seemingly irreconcilable goals into a higher synthesis.”

It is thus important to note the diverse influences that separate, as well as unify, each experimental composer within this study. Overall, it would be fair to state that the differences are more pronounced than the similarities. One common factor, however, that each composer would doubtless state, is that they did not choose to become ‘experimental’ musicians but rather that the music attached itself to them. That is a common creative fact of all artists. 

However, in terms of the innovative techniques used by composers to experiment fully across their particular musical spectrum, experimentation was necessarily married to technology. The tools available to Satie, for example, were markedly different from those at Eno’s disposal. This, essentially, meant that a composer was bound by science as well as by time. Technology influenced innovation, which greatly influenced the output of music. As studio innovation evolved, so did the productivity of experimental music, a point that Salzman (1967:161-162) explains.

“The vast improvement in amplification and speaker systems and the widespread availability of a faithful, durable and easily handled storage device – magnetic tape – made it possible for the composer to establish the fixed and final form of his creation by working directly on the medium and without the aid of an interpreter.” 

The introduction of the magnetic tape, in particular, constituted a revolution within the evolution of experimental music because it gave the creator the possibility of mastering time as well as pitch and tone. Whereas composers before the introduction of tape had to rely on recording within their own space and time, musicians after this time were able to make multiple copies of sound so as to introduce loops and cyclic patterns that could be further expanded upon or synthesised. For the listeners too (surely an equally significant ingredient to the creative process as the writers), technology and tape meant that a piece of music could be listened to over and again whereas previously music was confined to a singular performance given by the artist.

For the creators of experimental music, tape and home listening devices meant that the subtle intonations of sound and scale could be detected by the untrained ear over many plays of the same piece of music. Without doubt, many of the intricacies of nineteenth-century composers went unnoticed by audiences who had only one opportunity to listen to the score. With this creative freedom came experimentalism. Indeed, such was the effect of the advent of postmodernity with regard to musical composition that ambience itself could be manipulated on tape to influence tones, time signatures and polyrhythms. As John Cage (1973:67-68) observes, the boundaries of music have been stretched to an almost limitless expanse as a consequence.

“In the field of music we often hear that everything is possible; (for instance) that with electronic means one may employ any sound (any frequency, any amplitude, any timbre, any duration); that there are no limits to possibility. This is technically, nowadays, theoretically possible and in practical terms is often felt to be impossible only because of the absence of mechanical aids which, nevertheless, could be provided if the society felt the urgency of musical advance.”

Paradoxically, the limitless possibilities of ambience and studio production induced a minimalist backlash among the post-war experimental musicians of the West, particularly those in the USA, where the 1960s avant-garde and minimalist art scene characterised by Eva Hesse and Andy Warhol was reproduced in music. As the title of his 1973 book suggests, John Cage, for instance, was interested in silence as a musical medium, stripping bare the technological advances of modernity to reintroduce a sense of perspective and a new artistic relationship with nature in place of an excess of sound and studio technique. Cage’s method of composition is an amalgamation of chance and the residue of ambient noise, the sum being a wholly unpredictable sound that has even influenced mainstream pop music. 

Stockhausen was likewise an exponent of ‘intuitive’ music, attempting to get to grips with the catalyst for music in the first place, which required the composer to look inwards rather than concentrating on the external output of creation. Steve Reich, moreover, travelled to Africa to be tutored by African drummers; he taped his lessons and worked with the sound in an eight-track studio in New York, playing with the timing, if not the pitch, of the music and tribal chants. It is important to note that the technological changes of the second half of the twentieth century enabled the artistic impulse not to conform to taste and type. Experimental music, by definition, has to exist outside the parameters of the dominant music of the day, otherwise it would cease to be considered experimental.  

Yet just as Cage and Stockhausen opted to turn their backs on the possibilities raised by the introduction of magnetic tape, others embraced the new relationship between time and music that tape and tape loops offered the post-war experimental musician. Terry Riley, another American pioneer of experimental music, used the studio and tape to maximise the mathematical components of music. Using both the studio and live performance, Riley developed his sound over time, often using a concert performance to iron out imperfections in his music, as was the case with Persian Surgery Dervishes (1971-72). The end result, as Mark Prendergast (2003:103) explains, was a wholly innovative and experimental type of new music.

“Though the piece is at first seemingly static, the relationship between the repeated organ bass on the tape loop and various processing effects on Riley’s own mixer build to an intoxicating sound. Another reason for its success was just intonation.”

Riley’s music was therefore a reproductive experience: constantly mirroring sounds upon the reflection of one another with subtle differences in intonation catching the listener unawares. By embracing the new musical studios, an experimental musician could stretch the possibilities of sonic creation every bit as much as the performers who opted to curtail their use of sound; the one unifying factor is the demands that both types of experimental music make of their listeners. 

Thus far, the musicians cited within the study composed their music outside the sphere of commercialisation and the pressures of contractual pop music, yet an experimental musician does not necessarily have to operate within such a creatively secluded environment. As already briefly touched upon, a key component of the musical revolution of the twentieth century that permitted Stockhausen, Reich and Riley to play with the very concept of composition was the social revolution that forever altered the landscape of production during the 1960s. It therefore made sense that, within such a liberal social and political context, the work of Cage was soon transmitted into popular music.

The Beatles’ Tomorrow Never Knows, for instance, is seen as a celebratory marriage of experimental music and mass mainstream pop – John Lennon taking the avant-garde experiment further with Revolution 9 (on the White Album) and his solo pieces, The Wedding Album and Two Virgins (1969). Yet these recordings are memorable mostly for the wider audience that experimental music reached as a result of the Beatles’ fame, rather than for constituting a totally new form of music.

The arrival of Brian Eno upon the popular music scene, however, did constitute a shift in the focus of experimental music, owing to the innovative recording techniques that he had harboured and the influence he had on contemporary experimental musicians. Utilising echoes, delays and fluctuating volumes, often by combining and multiplying tape loops, Eno was able to manipulate studio music in a way that other famous producers, such as George Martin and Phil Spector, could not (in the case of the former) or would not (in the case of the latter) match. Brian Eno used ambience as the central actor in the studio, and his collaboration with the popular face of experimentalism, David Bowie, on the 1977 album Low has been cited as the genesis and trigger for the late 1980s and 1990s fascination with trance and dance music. Unlike Spector, Eno did not advocate the infamous ‘wall of sound’ studio techniques of the time; rather his technique was a mixture of Cage’s minimalism and Riley’s strict innovation, in the process bequeathing a new musical landscape to a new generation of experimental musicians.    

Eno also worked with former King Crimson guitarist Robert Fripp on Bowie’s 1977 album Heroes, where the two fused Fripp’s Les Paul electric guitar with Eno’s experimental production techniques to great effect on the title track. As Prendergast (2003:339) points out, the lesson of the combination of Eno, Bowie and Fripp was that, in the age of boundless recording techniques, less often means more.

“The slow tempoed electric guitar of Heroes sharded through Eno’s tape treatments for the Bowie song and was a pinnacle of understatement.” 

As Fripp experimented with avant-garde guitar techniques, other artists pushed the boundaries of electronic, synthesised creativity still further. Alvin Lucier, for instance, expanded his vision of musical performance to encompass the seemingly inanimate, non-musical objects around him. Lucier has used brain waves in live performances and notations to mark the physical movements of actual performers. 

By the time Fripp collaborated with The Orb, The Grid and The Future Sound of London in 1994, experimental music had moved into the mainstream to such an extent that tape-looped ambient dance music became a kind of musical Mecca for an entire generation, with hubs of activity such as Ibiza, Manchester and Amsterdam thriving off the legacy of all of the above artists, each and every one of whom aimed to experiment with the contemporary norm.  

Conclusion

It has been shown that experimental music was and remains tied to technological advances and to the casting of the studio as the central compositional actor. The day that The Beatles stopped touring in 1966 and vowed only to release recorded material was a watershed in terms of popular music and its ties to formula, but the groundwork had largely been laid by the pioneering composers before them. Mahler’s Symphonic Movements began an inquisitive train of thought that has yet to derail and likely never will, for as long as artistic integrity prevails over commercialisation and manufactured music. Modern bands such as Radiohead prove that the influences of electronic, minimalist and experimental music retain their resonance in the twenty-first century, and these sounds are able to evolve with new media and technology to play a key part in contemporary experimentalism.   

Ultimately, as John Cage (1973:69) concludes, there is no set thesis pertaining to experimental music and the people who create it; every artist is inspired and influenced by different genres and aspirations, while the time and space in which the artist operates is even more critical, inherently influencing the sound of the final product. The one certainty he seems to offer regarding experimental music is that it is beyond the tangible reach of calculated creative effort.  

“What is the nature of an experimental action? It is simply an action the outcome of which is not foreseen. It is therefore very useful if one has decided that sounds are to come into their own, rather than being exploited to express sentiments or ideas of order… for nothing one does gives rise to anything that is preconceived.”