Monday, September 30, 2019

Business Decision Mapping Essay

The Shamrock Manufacturing Chicago plant manager, Sean Fitzpatrick, is contemplating replacing a large piece of manufacturing equipment. Mr. Fitzpatrick is also in line for a promotion to Shamrock's larger Houston plant within the next year, and is hesitant to make any decisions that will reduce short-run operating income and his performance evaluation. While the prospective replacement equipment promises to reduce cash operating costs, it costs $90,000, and replacing it would also mean recognizing a loss on disposal of the old equipment, which has not fully depreciated. Prior to making a decision, Mr. Fitzpatrick must identify all relevant costs and choose the option that is in the best interest of Shamrock (Datar & Rajan, 2013).

Analysis

The available data to consider in this case are the old machine's purchase price ($150,000); the current book value of the old machine ($60,000); the market value of the old machine ($36,000); the cost of the new equipment ($90,000); and the reduction in annual cash operating costs ($32,500). All historical costs are irrelevant, as they have already occurred and have no effect on future costs. The only relevant costs that should be considered for this decision are the future cash operating costs, the disposal value of the old machine, and the cost of the new machine, which will be depreciated over the next two years. Based on the #1 and #2 worksheets in Appendix A of this document, year one yields an increase in expenditures of $6,500, but this includes the $24,000 loss on disposal of the old machine, which is irrelevant. The only relevant data are the total two-year costs shown on worksheet #2, which show a reduction in total relevant cash flow of $11,000. The results of worksheet #1 are not beneficial for Mr. Fitzpatrick, but the overall results in year two benefit Shamrock. Based on the #3 worksheet, with a lower new equipment cost ($77,000), year one breaks even, which is irrelevant, and the total two-year reduction in total relevant cash flow is $24,000.

Conclusion

Based solely on the worksheet information (Appendix A), the company should replace the equipment; a short calculation reproducing the worksheet figures follows this essay. All relevant costs in worksheets #2 and #3 indicate that Shamrock Manufacturing will benefit by replacing the machines at either equipment cost. However, worksheet #1 presents a problem for Mr. Fitzpatrick, as it shows a $6,500 increase in first-year expenses, which is irrelevant in the long run but may encourage Mr. Fitzpatrick not to purchase the new equipment because it may reflect badly on the short-run net operating income of his plant during the evaluation period for his promotion. Worksheet #3 offers a break-even scenario in the first year and a $24,000 reduction in relevant cash flows in year two, which is the best option for Mr. Fitzpatrick and Shamrock, if available.

Reference: Datar, S., & Rajan, M. (2013). Financial and managerial accounting (custom ed.). Pearson Learning Solutions, Ch. 9.

Appendix A: Shamrock Manufacturing relevant cash flow analysis

Appendix B: 5-Step Critical Thinking Decision-Making Process Matrix

Step 1: Identify the problem(s) and uncertainties. What exactly is the problem… Sean Fitzpatrick has an opportunity to decrease long-run cash flow by replacing a large piece of plant equipment. The problem is this… Mr. Fitzpatrick is up for a promotion and is concerned that any short-run decreases in operating income will affect his performance evaluation. This is an important problem because… Mr. Fitzpatrick's decision may be good for the company, but could hurt his career aspirations.
The key question(s) that need to be answered to solve this problem is… What is the best decision for Shamrock in the long run?

Step 2: Obtain information. The following information is needed to answer this question… What are the relevant costs that impact the decision to keep or replace the equipment? Based on the #1 and #2 worksheets, what decision would be made in years one and two? Based on the #3 worksheet, would the decision be different for years one and two compared to the initial cost of the new equipment? Some important assumptions I am using in my thinking are… I believe that the best decision for Shamrock is not the best decision for Mr. Fitzpatrick, which creates an ethical dilemma. The points of view relevant to this problem belong to… Sean Fitzpatrick. Note: Remember to view the information you have obtained for potential bias. This concerns both your own bias toward the research and the bias of the authors who compiled the data and the research you gathered. In other words, do not discount the importance of others' data because of your own bias(es).

Step 3: Make predictions about the future. If this problem gets solved, some important implications are… Long-run relevant cash flows will be reduced, and operating income will increase. If this problem does not get solved, some important implications are… An opportunity to decrease relevant cash flows will be missed. The potential alternative solutions to solve the problem are… Keep the status quo or make a tough decision that will benefit Shamrock in the long run. Note: if the problem is one-dimensional, there may be just one correct solution.

Step 4: Make decisions by choosing among alternatives. What is the best solution and why… Buy the new equipment, because it decreases long-run relevant cash flows.

Step 5: Implement the decision, evaluate performance, and learn. In business, the fifth step in the decision-making process is implementation. In the MBA program, most times you will end with Step 4, since you will not have the opportunity to implement. You may be asked to develop an implementation plan and recommend how you will evaluate performance in some assignments.
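The worksheet arithmetic summarized in the Analysis above can be reproduced in a few lines. The following sketch (in Python, purely for illustration; the original worksheets are spreadsheet-based) assumes straight-line depreciation over the remaining two-year life, which is what the worksheet figures imply; all dollar amounts come from the case data.

```python
# Relevant-cost arithmetic for the Shamrock replacement decision.
old_book_value   = 60_000   # remaining book value of the old machine (sunk)
old_market_value = 36_000   # disposal (market) value of the old machine
annual_savings   = 32_500   # reduction in annual cash operating costs
years            = 2        # remaining useful life

def two_year_relevant_benefit(new_cost):
    """Relevant items only: cash savings, disposal proceeds, new machine
    cost. Book value and the loss on disposal are sunk and excluded."""
    return annual_savings * years + old_market_value - new_cost

print(two_year_relevant_benefit(90_000))   # 11000 -> worksheet #2's $11,000
print(two_year_relevant_benefit(77_000))   # 24000 -> worksheet #3's $24,000

# Worksheet #1: year-one operating income, which (irrelevantly) includes
# the loss on disposal and the higher depreciation of the new machine.
loss_on_disposal   = old_book_value - old_market_value            # 24,000
extra_depreciation = 90_000 / years - old_book_value / years      # 15,000
print(annual_savings - extra_depreciation - loss_on_disposal)     # -6500.0
```

This makes the tension explicit: the two-year relevant totals favor replacement, while the year-one accounting view shows the $6,500 decline that worries Mr. Fitzpatrick.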

Sunday, September 29, 2019

Recording, analysing and using HR information Essay

The new HR Director has requested a report that reviews the organisation's approach to collecting, storing, and using HR data. The findings will cover: the reasons why the organisation needs to collect HR data; the types of data collected within the organisation and how each supports HR practices; a description of the methods of storing records and the benefits of each; and a statement of two essential items of UK legislation relating to the recording, storage, and accessibility of HR data.

1) Two reasons why the organisation needs to collect HR data

It is essential for organisations to keep up-to-date and accurate records to ensure efficient forward planning, remain competitive, and provide a good service to their employees and customers. There are a number of reasons why an organisation needs to collect HR data. These include to:
• satisfy legal requirements
• provide relevant information for decision making, consultation requirements, and future development/planning
• record contractual arrangements and agreements
• keep employee contact details
• provide information in the event of a claim being made against the organisation
• perform due diligence in the event of an organisational transfer

Government departments, including HMRC, can demand information from the business on how many people are employed, what they are paid, what they have been paid over a number of years, and how many hours they have worked. The Working Time Regulations and the National Minimum Wage Act each require specific records relating to hours of work and pay details. Employment protection rights demand that we keep records to protect ourselves, as employers, from claims that we have discriminated against or unfairly dismissed employees. Health and safety legislation demands that records are kept of accidents, exposure to hazardous substances, what training has been provided, and much more. Employers must be able to demonstrate responsible management of health and safety issues.

2) Two types of data that are collected within the organisation and how each supports HR practices

1. Organisational development

The CIPD defines organisational development as a 'planned and systematic approach to enabling sustained organisation performance through the involvement of its people'. [1] One of the challenges in the delivery of organisational development work is that it is not just what you do, but also the mindset that is brought to bear on the work. Amongst other areas, in practice the HR team works with the business development team to develop a performance management system that properly aligns individual and organisational goals (business aims/objectives and individual key roles and key performance indicators).

The relationship between organisational development and HR: it is the underlying characteristics of organisational development work that help us see the commonality across the different areas of organisational development and the link to HR. Organisational development work:
• contributes to the sustained health and effectiveness of the organisation
• is based upon robust diagnosis that uses real data from organisational, behavioural and psychological sources
• is planned and systemic in its focus, that is, taking account of the whole organisation
• helps practitioners create alignment between different activities, projects and initiatives
• involves groups of people in the organisation to maximise engagement, ownership and contribution.

2. Measuring and managing labour turnover
Labour turnover is becoming more important as a measure of organisational effectiveness. Keeping records of labour turnover is almost exclusively the responsibility of personnel and HR managers. Employers need to collect both qualitative and quantitative data on 'leavers', broken down into the number of resignations and dismissals and the reasons for them, also including natural retirements, ill-health retirements, and deaths in service. These are broken down by department/unit, length of service, and job/role. To establish how the organisation's findings compare against the general labour market, it can benchmark its turnover rates with other organisations.

The relationship between labour turnover and HR: the most effective way of controlling and minimising labour turnover is to be able to review, improve, develop, and implement effective changes to:
• resourcing and talent planning
• pay and reward management
• learning and talent development
• absence management
• resources and tools in place to manage workforce engagement and participation

3) A description of two methods of storing records and the benefits of each

HR records encompass a wide range of data relating to individuals working in an organisation, which may be stored in a variety of media, such as computer databases or paper files. There are advantages and disadvantages to both media.

1. Paper records: A risk analysis needs to focus on secure storage, the prevention of threats such as fire or theft, and ensuring that files can only be accessed by relevant personnel. There are legal requirements that employers must meet in terms of the length of time specific documents are to be retained, so thought needs to be given to storage space, and all files must be easily retrieved and accessed when required. The business must implement and maintain good document discipline, i.e. no paper should be left lying around for unauthorised access, and a clean-desk-at-night policy must be mandatory. However, there are some benefits to collecting and retaining paper files. For legal matters, documents may need to be presented that show the authenticity of originals, i.e. hold original signatures, etc. Paper files are not susceptible to computer viruses, they are user friendly, and there are benefits to their portability.

2. Computerised records: A risk analysis needs to focus not just on password protection but also on the long-term protection of data, including the potential major threats of computer failure, viruses, fire, and potential sabotage. The business provides each user with a back-up facility, anti-virus software, and firewalls. The business has a dedicated IT department that maintains and supports all IT systems and users. There are strict policies in place for all users to abide by, and any user found in breach of the policies will face disciplinary action by the business. Computerised records are beneficial because computerised systems allow for greater efficiency, performing specific tasks both more accurately and more rapidly than the same tasks done using paper-based records. Computerised records are easier to update, compare, and analyse, and they speed up the provision of information. The system will also deliver cost benefits through administrative savings.

4) A statement of two essential items of UK legislation relating to the recording, storage, and accessibility of HR data
The Data Protection Act 1998: Data protection concerns safeguarding data and information about living individuals to maintain their privacy and good information management practice. Data protection covers manual records, including paper and all other media, as well as those processed by information technology of any kind, e.g. email. Organisations should be committed to ensuring that all relevant personal data that they hold regarding employees, customers, and any other persons who are part of their operations is processed and protected in accordance with the legislation. The organisation can achieve this by upholding and complying with the eight Data Protection Principles and any amendments to them.

Saturday, September 28, 2019

A Business Trip to Chile Essay

Excited about visiting a South American country for the first time, I started my journey to Santiago de Chile from Miami on March 2nd, 2012. To start with, I was skeptical about the quality of a Chile-based airline, but I was amazed by the excellent service provided by LAN Airlines. My perception of a Chilean company changed then and there. Also, prior to my flight I doubted whether the crew on the flight would understand English (even though we were assured by the Professor that there wouldn't be language problems during the travel), and my doubts proved unfounded. In fact, the quality of the food given to us on the plane set up high expectations for my week-long stay in Santiago.

Day One

After watching Pirates of the Caribbean: At World's End, a movie I had been craving to watch for a long time, and a couple of hours of pleasant flight, we landed in Santiago on time. As soon as the automatic door swung open letting me into the airport, I noticed a group of people standing before a counter that was used to collect a reciprocity fee. The notice board before the counter showed "US – $140". As I didn't fully understand what a reciprocity fee was, and since I was coming into the country from the US, I stood at the back of a very short line counting my $140. When my turn came, I was pleasantly surprised to find out that it applies only to US citizens and that it is a one-time charge for the life of the passport. I wondered what the reciprocity fee was and later found out that this was the amount the US charges Chileans entering the country; for that reason, the fee is referred to as "reciprocity". After a little research, I found that five of the countries in South America charge such a fee: Argentina, Bolivia, Brazil, Chile and Paraguay. The fees charged are in direct relation to what the home country of the passenger charges residents of the country being visited. The fees look like a good source of revenue for these countries. I reached the Atton El Bosque hotel by hiring a taxi from the airport, after a little struggle to explain the hotel name and location to the taxi driver. After resting for a while, and after a brief orientation meeting, we started a city tour. The tour guide who accompanied us was very knowledgeable about the history and culture of Chile. It was a pleasure to see the La Moneda Presidential Palace and interesting to learn about the history of the palace. Construction of La Moneda started in 1784; it was built to be the country's official mint, hence the name, which translates to The Mint. A wiki entry shows that coins were minted there from 1814 to 1929, and in 1845 the palace became the residence of the president. I learnt an important piece of Chilean history that day: Chileans have a different 9/11 to remember, the military coup d'état of September 11, 1973. The then Commander-in-Chief Augusto Pinochet led the coup against President Salvador Allende. Despite the air raids and ground attacks on the palace, the President vowed to stay in the presidential palace and rejected the military's ultimatum to step down. Eventually the President killed himself (although this is questionable and still under scrutiny). The tour guide explained this really well to the group and pointed to a closed door, guarded by a uniformed officer, mentioning that the dead body of the president was taken out through this door.
After finishing the tour around the palace, we made a stop at Los Dominicos for some artisan shopping, and then the first day of the trip officially ended. Later, for dinner, we went to a place near the hotel where the service was not so good, so we decided to tip the waiter less than the usual 10%. But to our surprise, the waiter stood there demanding the remaining tip. We didn't know whether it was Chilean custom to tip 10% mandatorily. Later I found out that the livelihood of most waiters depends on tips; they may get a minimum salary, but it is barely enough to cover transportation. But in my view, financial dependence on tips doesn't necessarily mean that waiters and waitresses deserve tips for lousy service. Thus, day one ended with some important lessons learnt about the history and culture of Chile. Throughout the trip we were informed of the importance of the copper industry to the economy of Chile.

Day Two

We started early on day two for a two-hour trip to the port city of Valparaíso. En route to Valparaíso, the second largest city of Chile, we stopped at a place to refresh ourselves, and we saw some llamas at the back of the store. It was the first time I had seen a llama. Later, in a casual talk with one of the hotel staff, I learnt that during the Spanish conquest llamas were primarily used to bring down ore from the mines atop mountains, but the introduction of horses and donkeys diminished the importance of the llama as a beast of burden, and they are now primarily used as a source of food and fiber. The first thing that came to our attention in Valparaíso was the National Congress of Chile. Our tour guide pointed out that Pinochet shifted the congress from downtown Santiago to Valparaíso. The Chilean government, like that of the USA, has a bicameral legislature, made up of the Chamber of Deputies, which is the lower house, and the Senate. We also saw the Valparaíso market through the windows of the bus, and the guide mentioned that one could get all sorts of stuff (even used goods) at cheap prices in that market. Chile has two Nobel Prize winners, and both awards were in the field of Literature. Our tour itinerary included a visit to the house of one of the Nobel laureates, Pablo Neruda. I wondered whether there would be anything interesting to see at the house of a poet. Again, my perceptions turned out to be wrong after entering the house and listening to the narrations (in English!) through an audio guide. I liked the way Pablo named everything in his house, and the view of the port from his window was stunning. Then we trekked down the streets of Valparaíso and walked by the beautiful houses. The guide showed us certain parts of the town overlooking the port that were occupied mainly by the English, and a church which held services in German. Later we took a short ride on a funicular, which was once used to take residents up and down the steep hillsides of Valparaíso. The funiculars are now operated just for tourism purposes, as the cheap fee previously charged for routine use was not profitable for the operators. Anyhow, it was interesting to ride on a historic means of transport. From there, we proceeded to have lunch at a wonderful restaurant overlooking the sea. The founder of La Bicicleta Verde greeted us during lunch and gave us an introduction to his business.
His company, which gives bicycle tours of the city, was founded with a local partner and through InnovaChile, CORFO, the executing agency of government policies in the field of entrepreneurship and innovation. His insights about doing business in Chile were really thought-provoking, and his discussion revealed the support from the government for such innovations. After that, we took some time off walking along the beach under the bright sun and then returned to the hotel. The second day, too, was filled with lessons about the culture and business of Chile, and about the wonderful poet Pablo Neruda.

Day Three

On the third day, we visited Adolfo Ibáñez University, atop the scenic San Ramón Hill. The university is away from the city, and the tour guide told us that many poor people live near the college; thus, students have been wary of travelling to the college, as there have been many incidents of robbery. Anyhow, we reached the university, from where we could see the whole of Santiago from the hills. There, we attended a lecture from Guillermo Paraje, one of the eminent professors of the university, about the Latin American economies. The lecture started off with the information that the Latin American countries were only mildly affected by the economic crisis going on around the world. Also, the unemployment rate has been going down along with an increase in the average wage. Most importantly, the increasing price of copper has boosted the growth rate of the Chilean economy. The Professor took pride in mentioning that Chile is the first South American country to be an OECD member. One important point that the Professor touched upon was the low productivity of labor. He compared the productivity of Korea and Brazil, and his graphs showed that Korea has been growing its productivity at 4.7% whereas Brazil has been growing at only 0.1%. This trend was seen throughout the Latin American countries and is a growing area of concern. The Latin American countries were also lagging behind in the service sector. Moreover, there seems to be an increasing gap between the rich and poor. He raised an important point about Chile (and Latin American countries generally) remaining producers of raw materials alone. That is, he mentioned that Chile is the leading exporter of copper but is not a good producer of finished goods based on copper. Moving up this value chain, according to the Professor, should be the long-term strategy of all Latin American countries. A casual talk with the Professor after the lecture revealed that Chile is not investing much in renewable and nuclear energy. A recent proposal to invest in nuclear energy was rejected by the government, citing safety issues, especially after the incident in Japan. Being a growing country, Chile could encourage more people to invest in renewable energy. After that, we toured in and around the university and then returned to the bus, to be greeted by our smiling bus driver, who always referred to us as "Macho, macho". Later in the day, we had a presentation about Flora & Fauna Chile Ltda. (Ltda. stands for limitada, the designation for limited companies). The mining industries cause a lot of environmental issues, and the activities around mining have an impact on the wildlife of the region. The company does a wonderful job of minimizing the impact on habitat. The government made it mandatory for mining companies to get the advice of Flora and Fauna.
I was happy to learn that the government is actually interested in preserving the habitats of the various regions, assisted by this wonderful set of people who work for Flora and Fauna. Then we had a presentation from the Managing Director of Banco Santander, the leading bank in Latin America. He gave us some good insights into the financial system of Chile and told us that it ranks among the best in South America. Mr. Martin Perez also described the pension system of Chile: a reform in late 1980 replaced the pay-as-you-go regime with a fully funded pension system. The third day was filled with lessons about the economics and financial systems of Chile.

Day Four

The next day, we visited the Frito-Lay plant located in Cerrillos. The manager of the plant addressed us with some information about the plant. In Latin America, Frito-Lay has six production sites, and the Cerrillos plant was bought in 2008. One of the slides of the presentation showed a growth rate of around 8% in the volume of sales and a 15% increase in revenue since the inception of the plant. Another graph illustrated growth in the volume of Frito-Lay's salty snacks portfolio despite no new line having been added at the site; the manager mentioned the increase was because of an improvement in the efficiency of the site. The manager took pride in the fact that they have a world-class site in terms of efficiency, service and sustainability. For instance, the plant includes a series of photovoltaic panels on top of the factory that produce around 12 kW of electricity. Interestingly, the plant has reduced its water and energy consumption significantly, and future plans for the site include reusing 100% of the waste water. Once the presentation was done, the manager took us around the factory and showed us the various lines and packaging units. Along the way, he told us that the potatoes are grown under controlled conditions and are not the same as the ones used for domestic consumption. On my inquiring about some froth lying on the ground, the manager told me that it was the result of an experiment to re-use the starch produced from the potatoes. This was really surprising: apart from being very sustainable, the company was trying to innovate in various ways. Finally, on my inquiring about the software system used in the plant, the manager informed me that they are going to switch to SAP in a few months. The plant was going all out to become highly efficient. It was very impressive. After a delicious lunch, we visited CORFO Chile for an introduction to Start-Up Chile. This was the most interesting site visit for me. Start-Up Chile is one of the best incubator programs, designed to attract entrepreneurs from across the world. It was started by the Chilean government to convert Chile into the innovation and entrepreneurial hub of Latin America. We were presented with the ways in which an entrepreneur is selected for the program. Apparently, an expert team, including eminent people from the USA, selects the best among the applicants. It was also interesting to learn that the program has attracted people from India and China, and that too with minimal advertising in those regions. Through the Start-Up Chile program, entrepreneurs work on their projects in Chile and are reimbursed up to $40,000 in cash. During 2011-2012, the program attracted people from diverse industries including IT, e-commerce, energy, cleantech, etc.
The basic idea is to boost the confidence of local entrepreneurs by sending a message that Chile supports such innovations. The incoming people are also required to earn around 4,000 points to successfully complete the program. They earn points in various ways, including giving seminars at local universities, and thus they add value to Chile. I believe this is an amazing idea to boost the economy of a country which is presently dependent only on exports of raw materials. They are building a future which is not necessarily dependent on the export industry, and thereby Chile is on track to building a sustainable future.

Day Five

We visited the factory of Agricom, suppliers of fresh fruits, on the second-to-last day of our tour. Not surprisingly, the USA is the biggest market for the avocados exported from this facility. The company also offers other fruits such as grapes, drupes, oranges, and apples, and it generates more than 2,000 jobs for the Chilean labor market. Europe is also an important market for Agricom. As for future plans, Agricom is planning to invest in walnuts. The factory visit was very informative, and we could feel the urgency with which the workers went about their various activities. The urgency can be directly attributed to the freshness factor of the fruits. Then we visited Kross, a microbrewery. The founder welcomed us and took pains to explain to us the whole process of brewing the beer. When asked whether the recipe could easily be replicated, Mr. Asbjorn explained that he could write down the recipe and give it to me, but it would be very difficult to replicate the same taste, as he has the right equipment. He also mentioned that it is not good business sense to copy another beer. I felt it was a valid point, and I learnt an interesting lesson. We then had an amazing barbeque lunch at a picturesque building at Viña Mar, a famous vineyard in Chile. Later, we visited another winery called Viña Quintay, and the commercial manager of the company guided us through some wine tasting.

Day Six

On the final day of the trip, we had the most important topic as a presentation: mining in Chile, by a senior official from Kinross, a Canadian gold company. Starting with some basic facts about Chile, the Vice President informed us that corruption in Chile is really low and doing business in Chile is very easy. He went on to say that 28% of the world's copper reserves are in Chile and that Chile's economy is heavily dependent on mining, which is concentrated in the northern parts of the country. The workforce in Chile is well trained for the mining business, which makes this an important factor for investing in the mining industry. Chile is the world's largest producer of copper. It also produces gold, of which Chile is the 13th largest producer. Interestingly, Chile is the largest producer of lithium and the fifth largest producer of silver. The mining industry contributes 22% of GDP and 60% of exports; it directly employs around 70,000 people and indirectly employs more than 300,000. The Vice President went on to describe the challenges faced by the mining industry, which include the dwindling supply of water, increasing demand for energy, increasing demand for specialized labor, and so on. The trip ended on day six after the Kinross presentation.

Departure

Thus, I prepared to leave Chile after a wonderful trip with wonderful people.
I probably learnt more about Chile in this short trip than I would have learnt by reading a book about Chile. I learnt a lot about the economics, the importance of the mining industry, the rich history and culture of Chile, the stable financial system, etc. Experiencing the culture was really important, and if I ever start a business I would definitely look at Chile as the first option. Final lesson: if you pay your room rent and other expenses at the hotel with US dollars, you don't have to pay sales tax.

Friday, September 27, 2019

Brazil Essay

Dazzling beaches, lush green forests and an ever-awake nightlife come together to make Brazil a magical land. In the following part, a brief discussion is presented on the Brazilian beaches and natural beauty, people, carnival and culture. Rio de Janeiro, one of the most beautiful cities in the world, might be termed the crown of Brazilian beauty. Though it stands second to São Paulo in terms of population, it is the most famous among tourists. Rio lies amidst Guanabara Bay, the Copacabana, Ipanema and Leblon beaches, and a lush green mountain range. However, the most beautiful Brazilian beach is not in Rio or by the ocean; rather it is deep in the heart of the famous rainforests of the Amazon. This beach is known as Alter do Chão. Tourists often call the Amazon a green inferno, considering the hot and humid climate there. If that is true, then the mentioned beach is nothing short of a golden paradise. Fernando de Noronha is an archipelago off the north-east coast of Brazil. It is a place of clear blue water where one can easily spot turtles, octopuses, sharks and much other sea life. The Brazilian authorities strictly limit the number of tourists to keep disturbance to the natural habitat minimal. This might be one of the most important reasons that the food chain has remained unaffected here, and the sharks therefore can find plenty to eat without targeting human beings. Praia do Toque is a beach that is a bit isolated from the hue and cry of day-to-day life and therefore offers perfect leisure time (McOwan). These are only a few of the beaches; apart from these there are many others which are by no means any less appealing to tourists. Apart from beaches, the Amazon rainforest is another attraction on the Brazilian tourist map (Gray). The Brazilian part of the Amazon rainforest displays a diverse ecosystem and lies in the northern part of the country. At the very centre of the Brazilian part of the Amazon rainforest is the world-famous Pantanal. Considering the immense natural diversity and unique ecosystem of the place, it has been recognised as a Patrimony of Mankind by UNESCO. With its dense vegetation, the highest in the Americas, the Pantanal is the richest and most diverse of the ecosystems in the world. However, Brazil is not only a country of sand, beaches and forests; it is home to one of the most spectacular falls of the world. The Iguazu Falls, located at the border of Brazil and Argentina, mesmerise with their immense beauty and are perhaps the most alluring natural feature of Brazil (Brazil). Among all the attractions of Brazil, perhaps the Brazilian people occupy the first place, considering their overwhelming warmth, friendliness and intense passion for enjoying life. Like its natural diversity, the people of Brazil are also diversified. The whites and the browns occupy the lion's share, followed by blacks; there are also traces of Asians and Amerindians. Intermarriage between the indigenous people, Portuguese settlers and the African slaves who were brought into Brazil to work in the fields produced the browns, who include Caboclos, Mulattos and Cafuzos (Brazil and Africa). Most of the country's population live in and around the urban centres, and the urban population displays a higher literacy rate than the rural population; overall the country has a high literacy rate. The people of the country are predominantly Catholic, though over the last

Thursday, September 26, 2019

Art History Essay

Herein, it should be noted that the criteria of Claire Bishop, Jacques Rancière, and Willi Bongard will be applied in this paper to assess the position of Tracey Emin. As a matter of fact, various research works have been carried out in order to assess the impact that artists have made on the art market. At the same time, the art market has been greatly affected by the demand for quality. It is for this reason that a number of viewpoints have been adopted to assess and judge art work. Thus, the subjective nature of judgments is further questioned as far as the dematerialization of art is concerned. At this point, Claire Bishop's criterion helps in judging a work to be good in contemporary art. Claire Bishop stated that any art work can be considered good only if it follows the roots of aesthetics and allows enough space for one to argue and question. In addition, Claire Bishop also remarked that a good work creates hope for development anew. Claire Bishop, a well-known curator and art critic, maintains the viewpoint that art became a collaborative practice for most artists at the beginning of the 20th century. It was the era when communism collapsed in society. At this point, there was little or no difference between art and society itself. The criterion of Claire Bishop notes that there are two significant wings of the art revolution. On the one hand, there is the painting and sculpture being created for the needs of the art market; on the other hand, there are creative artists of the modern world who are able to undertake radical work. Bishop has been observed to call modern artists avant-garde. From Claire Bishop's analysis of what makes a good art work, it comes to be understood that an art work needs to be defined within the context of aesthetics. In this case, Tracey Emin can notably be regarded as a creator of good art. It is because her artistic works

Colonialism and Disease in Cholera, Kuru and Anthrax Essay

In the early 1600s, Spain and other large nations were looking to develop land in the New World for themselves and to gain gold, silver, and power whilst converting natives. After coming to the New World, the Spanish explorers conquered the natives and built settlements. However, many historians observe that as colonialism increased, the rates of certain contagious diseases also increased, and that Western medicine became another justification for promoting colonialism. Historians such as Roland Chrisjohn and John S. Milloy from Canada have since published documents showing evidence of how discussion about the spread of disease was concealed by colonialists to hide the actual origins of how the natives were infected with the new diseases. Historians have stated that European colonists, on discovering that the indigenous people were not immune to certain diseases, deliberately spread those diseases for military advantage and to subjugate the local people. Therefore, the correlation between colonialism and disease can be examined in the following disease cases: cholera in India, kuru in the Eastern Highlands of New Guinea, and smallpox in China during late imperial times. Cholera, defined as an Asian or Indian disease during the 19th century (Nappi, Lecture 3.1, 21 January 2014), was rampant in India and also in industrially developed countries such as the United Kingdom. It spread across the world from its source in the Ganges delta in India. Cholera is an acute, virulent diarrhoeal disease that affects both children and adults and kills within hours if left untreated. Effective control of cholera relies on preparedness, prevention, and response. According to most European and American physicians of the time, cholera was a locally produced miasmatic disease brought about by direct exposure to filthy and decayed products.

Wednesday, September 25, 2019

Politeness in Discourse Analysis Essay

Due to its expansive scope, politeness has been a subject of interest to academics in various disciplines, including linguistics, cognitive psychology, social psychology, philosophy, communication, and others (Chimombo & Roseberry 1998). Politeness is a potent instrument for attaining control over an interpreter. The concept of politeness obviously fulfills a major function in the level of cooperation among participants in a dialogue. Politeness is cultural in nature (Martin 1993). As argued by Goffman (1956), what makes politeness crucial is the reality that discourses commonly present the interpreter with a 'face-threatening act'. Negative responses, such as refusals, are one instance of such an act. If people ask courteously for something and are brusquely turned down, then they may feel humiliated or offended. People of several cultures view such straightforward conduct as a threat to one's face, meaning the personal image that the individual presents in a dialogue. If one individual insults another by performing a face-threatening act, the reply, in contemporary colloquial or informal English, could be "Get outta my face!" (Holtgraves 2002: 39). The extent of frankness that an individual can tolerate without sensing that a face-threatening act has been performed seems to depend greatly on culture. Efforts that have been made to furnish an explanation of politeness that is wide-ranging enough to be relevant across cultural frontiers have been fairly broadly criticised (Fraser 1990).

Tuesday, September 24, 2019

The Legal and Ethical Environment of Business Essay

This is not the case where recoveries can be made directly from the agent or person causing the harm; rather, the principle behind vicarious liability is that an employer exerts control over the physical conduct of an agent and is therefore responsible for the harmful conduct. In a recent case, Arena Group 2000, the maker of a sign that fell on a San Diego man and paralyzed him, was held vicariously liable for the injury caused.2 There are also several cases where private actions for securities fraud under Section 10(b) show that corporate officers and law firms are being held vicariously liable for preparing misleading disclosure documents.3 It was held in this case that even where secondary agents are involved, where they participate in a fraudulent activity to an extent which could characterize them as authors or co-authors, they may be liable for damages accruing from such harmful activity.4 Another case where vicarious liability for tort was imposed upon an employer was American Society of Mechanical Engineers Inc v Hydrolevel Corp,5 where common-law agency principles were used to impute liability upon an employer in a position of sufficient authority to exert control. An employer can also become vicariously liable for the harm caused by its employees under a theory of negligent hiring, where adequate checking of references and skills is not carried out by an employer before the hiring is completed.6 The strict liability rule may be enforced in corporations such as DWI, and especially public corporations, where the corporation will be expected to assume liability for the tortious acts of its employees.

Monday, September 23, 2019

UK's Land Use Planning Essay

This responsibility is vested with the Office of the Deputy Prime Minister in England, and in Wales and Scotland with the Welsh Assembly Government and the Scottish Executive respectively. In addition, these departments have to develop national planning policy guidance within which local authorities have to function (British Geological Survey, n.d.). The endeavour of the Planning Practice Standard is to develop environmental impact assessment (EIA) as a planning tool in order to promote the objectives of town and country planning. "This PPS updates the RTPI Practice Advice Note 13, published in 1995, to reflect the requirements of the amended EIA Regulations, which came into effect in 1999." In order to implement the European Directive 85/337/EEC, as amended by Directive 97/11/EC, legislation on environmental impact assessment has been introduced in the UK. Section 71A of the Town and Country Planning Act 1990 contains the requirement to carry out EIA of certain planning proposals (The Royal Town Planning Institute, 2001). The use of land determines irrevocably the fate of natural and semi-natural ecosystems; consequently, sustainable development is ably assisted by nature conservation policies, and their relations with land use exemplify the importance being accorded to planning. This process makes it essential to establish fundamental links between developments in particular localities and environmental changes on a worldwide basis. This methodology requires the adoption of a strategic approach to the conservation and enhancement of biodiversity (Cowell and Owens 2002). The basic human responsibility to protect and improve the environment for the benefit of present and future generations was expressed on the global level as early as 1972, in Principle 1 of the Stockholm Declaration, but the Aarhus Convention is the first international legal instrument to extend this concept to a set of legal obligations (Stec and Casey-Lefkowitz 2000). Land-use planning is concerned not only with site protection; of late, it is proving of immense relevance in the adoption of a proactively strategic approach to the conservation of nature. This approach must not only concentrate on preservation of what has survived but, more importantly, address itself to the problem of habitat restoration and enhancement. In the UK this change is visible in legislation and in the guidance being provided to local planning authorities by government, statutory agencies and non-governmental organisations for ensuring the protection of biodiversity. Planning and nature conservation policy have been influenced to a great extent by the latest interpretations of sustainable development, especially those which involve the concepts of environmental capital and capacity. The role of land-use planning has been highlighted by European legislation, and in particular the Habitats Directive, which aims to conserve European species and habitats. This Directive enjoins upon national governments the requirement to nominate Special Areas of Conservation (SACs), which are to be provided with stringent protection. That there are limitations to this approach is borne out by the fact that, despite their protected status, many sites have been lost or damaged as a result of land-use change. The major culprit in this respect has been development as defined in town and country planning legislation, which has

Sunday, September 22, 2019

JC Penney Essay

This is not the first time that this company has been faced with adversity. The first time was in the 1960s, when shopping went from downtown locations to mall locations, and the company transitioned to mall locations to cope with the change. This time the change did not come easy to the company; in fact, it has cost the company millions. This time JC Penney faced a challenge of its own choosing: the company wanted to change the public's perception of it. It no longer wanted to be viewed as an old-fashioned department store; it no longer wanted sales or clearance racks. It wanted to change the whole retail climate, calling this fair and square pricing (Baskin, 2013). This came off a lot like Wal-Mart's "Always Low Prices" campaign. This sounds like a great idea to me. However, it failed for many reasons, the main one being that it was confusing to consumers, along with poor marketing. Many people sat in anticipation of this new campaign by JC Penney, and there were just as many supporters in the beginning. When I heard of this I thought of an upscale Wal-Mart: low prices, so I would not have to shop for sales anymore, because these would be low prices every day. However, very shortly after this I found myself not shopping there at all. Consumers want a deal, and they do not feel that deal when they shop there anymore. It is the thrill of the hunt for consumers. Not only that, but the sale ads and clearance racks used to change; since prices no longer change, there is no need to go daily, weekly, or even monthly. Customers may check there as a way to showroom, but they are not buying. Without the sales and without the sale advertisements, the company is not bringing in nearly the number of people who used to come into the store to score the best deal. Next, the advertisements they are sending out are worded poorly. They are no longer doing sales, but they do mail out what they call month-long values. Customers did not understand the wording; it was never broken down for them. In effect they still had sales, but the sales were not called by the standard name; therefore, customers missed out on them, and the promotions were not bringing in the clientele the way a "sale" probably would have. Another problem with this campaign is that the average consumer does not know what the clothing costs, so they think it should be or could be marked down. They have no idea whether they are getting a good deal or not. Again, the thrill of the hunt is gone, and the customers are still confused. It was confusing to customers, and that means there is a problem in marketing. When a company makes changes that could potentially be confusing, marketing is the key. However, their advertisements were so irreverent that they made even less sense to begin with. After their numbers dropped, they came up with a campaign called "do the math." It was supposed to show how much easier it is to just get a low price in the beginning rather than use a coupon. This campaign failed for the company. The CEO, Ron Johnson, came out and reported later that "it was confusing" to some of their consumers (Baskin, 2013). It's no wonder that they lost customers: they did not target competitors' prices, just what the company itself was trying to do. Last but not least, they attempted to open little stores inside their stores, a Martha Stewart collection like IKEA.
Even that failed, because Martha Stewart was not able to put her name on it, as she was still in litigation over her brand, so it was still branded as JC Penney. Not that the name would have made much difference, but it was not thoroughly thought out within the company. Also, this is not a new tactic; stores have been doing this for years. The renovation of the stores to add in these small shops was costly: it has cost the company millions of dollars, has depleted its cash, and has also caused its credit rating to drop (Baskin, 2013). This was a costly decision to make when sales were already down. Here is the largest problem they had: they wanted to become a high-end store in a low-end economy. If I were the CEO of JC Penney, I would make quite a few changes. My first change would have been to go back to traditional wording for now; these are the words that customers are most familiar with. I understand that some companies like to do all their changes at once because it is cheaper. However, when you are changing familiar terms it is wise to do it slowly, or at least to explain it as thoroughly as possible. Change is needed as a society, but no one likes change; because of this, I feel changes should be made slowly and over a period of time. I would also have changed the price tags on the merchandise. In order to make someone feel like they are getting a deal, I would change how items were priced: I would put a suggested retail price and then put "our" price on the label. This would show customers that they were getting a deal. Sales exist because prices are higher than they need to be; the advertising is all about "trickery" to pull people in. In reality they were still doing sales, but they were not called sales, and people did not buy into it. By changing the price tags on the items, customers still get the thrill of the buy. They can see what their item is going for at competitors' locations and impulse buy. This helps eliminate "showroom" shopping, or leaving to check prices somewhere else. If it is a matter of a few dollars, they will not go back to purchase; however, if they can see the deal, they will buy. Instead of focusing on expanding a business inside of an already existing business, I would have spent the money elsewhere. Imagine if they could have established new rules for buyers, gone back to JC Penney's original roots, and proven their claims. They could have created new financing and lay-away policies that communicated value, and used social media to create meaningful communities of consumers who wanted to track and participate in conversations about prices. Employees could have been recruited and trained to offer a fundamentally new customer experience based on integrity. They could have changed the way Americans shop. I would not have wasted money on a bewildering advertisement; I would have spent marketing money on calling competitors out on their prices and sharing the news of how Penney's was changing and how forward-looking it was. Instead of making confusing ads with no sales just to avoid the word sale, I would not have tried to stay away from it, since they were still doing sales, just not on certain items. Limiting the sales options was not the problem; the problem was using unfamiliar wording. Measuring some of these techniques could be hard to do. Going back to traditional wording would be one change that is hard to track.
However, I believe it would go hand in hand with how you would track the new price tags: that would be sales. With these new changes and advertisements, I would expect sales to increase. I would not look only at the overall accounting books but do a twelve-month comparison of the sales of each individual store. This is time-consuming and costly, but I think it is the only way to see how each store is doing in comparison to the previous month and year. During high-sale times I would make sure to have as much staff as possible on the floor to assist our customers. Maybe they do not need help, but a casual conversation can reveal why they came into this department store and not the one across the way. Along with this, I would institute team meetings once a week where department heads meet with their front-line employees on all shifts to find out their ideas and where they are hearing concerns. Then I would have them write these up and hold a teleconference with each store head to hear the ideas, questions, and concerns. This amounts to an open-door policy. I would also place suggestion boxes not only in the store but in the break room, so employees could raise issues anonymously if they felt the need. Also, I would make the contact information of everyone in charge available to employees. Change can happen, and many great ideas come from the front line, because they see and do it every day; however, their voices are not often heard. To measure the effectiveness of advertising I would do a few things. I would add a survey at the end of the receipt to find out what customers thought about the advertisement. I would also add a quick questionnaire in the store that the customer could fill out, and I would make it known that there is a number they can call at any time with questions and concerns, so that they could be heard. Before launching a campaign I would run a test market, so that we could see what people recalled from the ad and find out if there was any confusion about what was advertised. The sales figures would also play a large part in judging whether a campaign was effective. A company that has been operating for 100 years is struggling. JC Penney was once a fashion icon to children, young adults, and teens. Beginning in 1913, it currently operates over 1,000 stores. Growing up, my sister and I waited to go through its catalogue. However, in the last few years something has changed. The company didn't look far enough ahead to predict these changes. It tried to become a higher-end, boutique-like store in an economy that could not support it. Poor marketing and too many changes have made this once-booming store one of the top ten stores predicted to be out of business in the next year.

Works Cited

Baskin, J. (2013, January 2). Lessons From JC Penney's Doomed Marketing Makeover. Forbes. Retrieved May 12, 2013, from http://www.forbes.com/sites/jonathansalembaskin/2013/01/02/lessons-from-j-c-penneys-doomed-marketing-makeover/

Tuttle, B. (2012, June 19). More Troubles for JCPenney: Top Executive Departs Amid Sales Slump. Time Magazine. Retrieved May 12, 2013, from http://business.time.com/2012/06/19/more-troubles-for-jcpenney-top-executive-departs-amid-sales-slump/

Saturday, September 21, 2019

Performance Measure of PCA and DCT for Images

Performance Measure of PCA and DCT for Images Generally, in Image Processing the transformation is the basic technique that we apply in order to study the characteristics of the Image under scan. Under this process here we present a method in which we are analyzing the performance of the two methods namely, PCA and DCT. In this thesis we are going to analyze the system by first training the set for particular no. Of images and then analyzing the performance for the two methods by calculating the error in this two methods. This thesis referred and tested the PCA and DCT transformation techniques. PCA is a technique which involves a procedure which mathematically transforms number of probably related parameters into smaller number of parameters whose values dont change called principal components. The primary principal component accounts for much variability in the data, and each succeeding component accounts for much of the remaining variability. Depending on the application field, it is also called the separate Karhunen-Loà ¨ve transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD). DCT expresses a series of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. Transformations are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded), to spectral methods for the numerical solution of partial differential equations. CHAPTER 1 INTRODUCTION 1.1 Introduction Over the past few years, several face recognition systems have been proposed based on principal components analysis (PCA) [14, 8, 13, 15, 1, 10, 16, 6]. Although the details vary, these systems can all be described in terms of the same preprocessing and run-time steps. During preprocessing, they register a gallery of m training images to each other and unroll each image into a vector of n pixel values. Next, the mean image for the gallery is subtracted from each  and the resulting centered images are placed in a gallery matrix M. Element [i; j] of M is the ith pixel from the jth image. A covariance matrix W = MMT characterizes the distribution of the m images in Ân. A subset of the Eigenvectors of W are used as the basis vectors for a subspace in which to compare gallery and novel probe images. When sorted by decreasing Eigenvalue, the full set of unit length Eigenvectors represent an orthonormal basis where the first direction corresponds to the direction of maximum variance i n the images, the second the next largest variance, etc. These basis vectors are the Principle Components of the gallery images. Once the Eigenspace is computed, the centered gallery images are projected into this subspace. At run-time, recognition is accomplished by projecting a centered  probe image into the subspace and the nearest gallery image to the probe image is selected as its match. There are many differences in the systems referenced. Some systems assume that the images are registered prior to face recognition [15, 10, 11, 16]; among the rest, a variety of techniques are used to identify facial features and register them to each other. Different systems may use different distance measures when matching probe images to the nearest gallery image. 
Different systems select different numbers of eigenvectors (usually those corresponding to the largest k eigenvalues) in order to compress the data and to improve accuracy by eliminating eigenvectors corresponding to noise rather than meaningful variation. To help evaluate and compare individual steps of the face recognition process, Moon and Phillips created the FERET face database and performed initial comparisons of some common distance measures for otherwise identical systems [10, 11, 9]. This work extends theirs, presenting further comparisons of distance measures over the FERET database and examining alternative ways of selecting subsets of eigenvectors. Principal Component Analysis (PCA) is one of the most successful techniques used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The jobs PCA can do include prediction, redundancy removal, feature extraction, and data compression. Because PCA is a classical technique that operates in the linear domain, applications having linear models are suitable: signal processing, image processing, system and control theory, communications, and so on. Face recognition has many applicable areas. Moreover, it can be categorized into face identification, face classification, or sex determination. The most useful applications include crowd surveillance, video content indexing, personal identification (e.g., driver's licenses), mug-shot matching, and entrance security. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in the compact principal components of the feature space. This can be called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors). The details are described in the following section. PCA computes the basis of a space which is represented by its training vectors. These basis vectors, actually eigenvectors, computed by PCA are in the direction of the largest variance of the training vectors; as said earlier, we call them eigenfaces. Each eigenface can be viewed as a feature. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features in the face. The face is thus expressed in the face space by its eigenface coefficients (or weights). We can handle a large input vector, a facial image, only by taking its small weight vector in the face space. This means that we can reconstruct the original face with some error, since the dimensionality of the image space is much larger than that of the face space. We now consider a face recognition system using the Principal Component Analysis (PCA) algorithm. Automatic face recognition systems try to find the identity of a given face image according to their memory. The memory of a face recognizer is generally simulated by a training set. In this project, the training set consists of features extracted from known face images of different persons.
Thus, the task of the face recognizer is to find the most similar feature vector among the training set to the feature vector of a given test image. Here, we want to recognize the identity of a person from an image of that person (the test image) given to the system. PCA is used as the feature extraction algorithm in this project. In the training phase, feature vectors are extracted for each image in the training set. Let Γ_A be a training image of person A with a pixel resolution of M × N (M rows, N columns). In order to extract the PCA features of Γ_A, the image is first converted into a pixel vector Φ_A by concatenating each of the M rows into a single vector. The length (or dimensionality) of the vector Φ_A will be M × N. The PCA algorithm is then used as a dimensionality reduction technique which transforms the vector Φ_A into a vector ω_A of dimensionality d, where d ≪ M × N. For each training image Γ_i, these feature vectors ω_i are calculated and stored. In the recognition phase (or testing phase), a test image Γ_j of a known person is given. Let α_j be the identity (name) of this person. As in the training phase, the feature vector of this person is computed using PCA to obtain ω_j. In order to identify Γ_j, the similarities between ω_j and all of the feature vectors ω_i in the training set are computed. The similarity between feature vectors can be computed using Euclidean distance. The identity of the most similar ω_i is the output of the face recognizer. If i = j, the person j has been correctly identified; otherwise, if i ≠ j, the person j has been misclassified.

1.2 Thesis structure
This thesis is divided into five chapters as follows. Chapter 1: Introduction. This introductory chapter briefly explains the procedure of transformation in face recognition and its applications, sets out the scope of the research, and gives the structure of the thesis. Chapter 2: Basics of Transformation Techniques. This chapter introduces the two transformation techniques for which the analysis is performed and whose results are used for face recognition. Chapter 3: Discrete Cosine Transform. This chapter continues from Chapter 2: the second method, the DCT, is introduced and analyzed. Chapter 4: Implementation and Results. This chapter presents the simulated results of the face recognition analysis using MATLAB, explains each step of the design, and gives the tested results of the transformation algorithms. Chapter 5: Conclusion and Future Work. This final chapter concludes the research, discusses the results achieved, and suggests future work.

CHAPTER 2 BASICS OF IMAGE TRANSFORM TECHNIQUES

2.1 Introduction
Nowadays image processing has gained so much importance that it is applied in every field of science, both for security purposes and to meet growing demand. Here we apply two different transformation techniques in order to study their performance, which will be helpful for detection purposes.
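As a concrete starting point, the train-and-match pipeline described in Chapter 1 can be sketched in MATLAB. This is only a minimal sketch under stated assumptions: the folder names, image size, grayscale images, and the number of retained components are illustrative choices, not values from the thesis; imread, im2double, and imresize come from MATLAB's Image Processing Toolbox, and vecnorm requires R2017b or later.

% --- Training phase: build eigenfaces and gallery feature vectors ---
imgSize = [64 64];            % assumed resolution (M rows, N columns)
files   = dir('train/*.pgm'); % assumed gallery location, grayscale images
m       = numel(files);
X       = zeros(prod(imgSize), m);
for j = 1:m
    I = im2double(imread(fullfile('train', files(j).name)));
    X(:, j) = reshape(imresize(I, imgSize), [], 1);  % unroll image into a column
end
mu = mean(X, 2);              % mean image
B  = X - mu;                  % centered gallery (implicit expansion)

% Eigenvectors of the small m x m matrix B'*B give the eigenfaces cheaply.
[V, D]   = eig(B' * B);
[~, ord] = sort(diag(D), 'descend');
k        = min(20, m);                 % assumed number of components kept
W        = B * V(:, ord(1:k));         % eigenfaces as columns
W        = W ./ vecnorm(W);            % normalize to unit length
trainW   = W' * B;                     % k x m gallery feature vectors

% --- Recognition phase: project a probe and find the nearest gallery image ---
probe  = im2double(imread('test/probe.pgm'));        % assumed test image
w      = W' * (reshape(imresize(probe, imgSize), [], 1) - mu);
[~, i] = min(vecnorm(trainW - w));     % Euclidean nearest neighbour
fprintf('Probe matched to training image %s\n', files(i).name);

The eig(B'*B) step is the standard eigenface shortcut: when the number of gallery images is much smaller than the number of pixels, the eigenvectors of the full pixel covariance matrix can be recovered from this much smaller m × m problem.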
The computation of the performance of the image given for testing is performed with two transforms: PCA (Principal Component Analysis) and DCT (Discrete Cosine Transform).

2.2 Principal Component Analysis
PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD). PCA is now mostly used as a tool in exploratory data analysis and for making predictive models. PCA involves the calculation of the eigenvalue decomposition of a data covariance matrix, or the singular value decomposition of a data matrix, usually after mean-centering the data for each attribute. The results of this analysis technique are usually shown in terms of component scores and loadings. PCA is the simplest of the true eigenvector-based multivariate analyses. Its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate data set is visualized as a set of coordinates in a high-dimensional data space (one axis per variable), PCA supplies the user with a lower-dimensional picture, a "shadow" of the object seen from its most informative viewpoint. PCA is closely related to factor analysis; indeed, some statistical software packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and solves the eigenvectors of a slightly different matrix.

2.2.1 PCA Implementation
PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system, such that the greatest variance from any projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimal transform for given data in least squares terms. For a data matrix X^T with zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), where each row represents a different repetition of the experiment and each column gives the results from a particular probe, the PCA transformation is given by

Y^T = X^T W = V Σ^T,

where Σ is an m-by-n diagonal matrix with non-negative diagonal elements and X = W Σ V^T is the singular value decomposition of X. Given a set of points in Euclidean space, the first principal component corresponds to the line that passes through the mean and minimizes the sum of squared errors with those points. The second principal component corresponds to the same quantity after all correlation with the first principal component has been subtracted from the points. Each eigenvalue indicates the portion of the variance that is associated with its eigenvector. Thus, the sum of all the eigenvalues is equal to the sum of the squared distances of the points from their mean, divided by the number of dimensions. PCA essentially rotates the set of points around its mean in order to align it with the first few principal components. This moves as much of the variance as possible into the first few dimensions.
The values in the remaining dimensions tend to be small and may be dropped with minimal loss of information; in this way PCA is used for dimensionality reduction. PCA is the optimal linear transformation for keeping the subspace that has the largest variance. This advantage comes at the price of greater computational requirements compared with, for example, the discrete cosine transform; non-linear dimensionality reduction techniques tend to be more computationally demanding still. Mean subtraction is necessary in performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component will instead correspond to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the principal component w_1 of a data set x can be defined as

w_1 = arg max_{||w||=1} E{ (w^T x)^2 }.

With the first k − 1 components given, the kth component can be found by subtracting the first k − 1 principal components from x,

x̂_{k−1} = x − Σ_{i=1}^{k−1} w_i w_i^T x,

and substituting this as the new data set in which to find a principal component:

w_k = arg max_{||w||=1} E{ (w^T x̂_{k−1})^2 }.

The Karhunen-Loève transform is therefore equivalent to finding the singular value decomposition of the data matrix X, and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors W_L: Y = W_L^T X. The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariances C = X X^T. The eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the data set (see Rayleigh quotient). PCA is equivalent to empirical orthogonal functions (EOF), a name which is used in meteorology. An auto-encoder neural network with a linear hidden layer is similar to PCA: upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, however, this technique will not necessarily produce orthogonal vectors. PCA is a popular primary technique in pattern recognition, but it is not optimized for class separability. An alternative is linear discriminant analysis, which does take this into account.

2.2.2 PCA Properties and Limitations
PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high-dimensional vectors into a set of lower-dimensional vectors and then reconstructing the original set. It is a non-parametric analysis, and the answer is unique and independent of any hypothesis about the probability distribution of the data. However, these latter two properties are regarded as weaknesses as well as strengths: being non-parametric, no prior knowledge can be incorporated, and PCA compression often incurs a loss of information. The applicability of PCA is limited by the assumptions [5] made in its derivation. First, the observed data set is assumed to be a linear combination of a certain basis; non-linear methods such as kernel PCA have been developed without assuming linearity. Second, PCA uses the eigenvectors of the covariance matrix, so it only finds the independent axes of the data under the Gaussian assumption; for non-Gaussian or multi-modal Gaussian data, PCA simply de-correlates the axes.
When PCA is used for clustering, its main limitation is that it does not account for class separability, since it makes no use of the class label of the feature vector. There is no guarantee that the directions of maximum variance will contain good features for discrimination: PCA simply performs a coordinate rotation that aligns the transformed axes with the directions of maximum variance. It is only when we believe that the observed data have a high signal-to-noise ratio that the principal components with larger variance correspond to interesting dynamics and the lower ones correspond to noise.

2.2.3 Computing PCA with the covariance method
Following is a detailed description of PCA using the covariance method. The goal is to transform a given data set X of dimension M into an alternative data set Y of smaller dimension L; equivalently, we are seeking the matrix Y, where Y is the Karhunen-Loève transform (KLT) of matrix X.

Organize the data set. Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Write the observations as N column vectors x_1, ..., x_N, each of which has M rows, and place the column vectors into a single matrix X of dimensions M × N.

Calculate the empirical mean. Find the empirical mean along each dimension m = 1, ..., M, and place the calculated mean values into an empirical mean vector u of dimensions M × 1.

Calculate the deviations from the mean. Mean subtraction is an integral part of finding a principal component basis that minimizes the mean square error of approximating the data. Hence we center the data by subtracting the empirical mean vector u from each column of the data matrix X and storing the mean-subtracted data in the M × N matrix B: B = X − u h, where h is a 1 × N row vector of all 1s.

Find the covariance matrix. Find the M × M empirical covariance matrix C from the outer product of matrix B with itself: C = E[B ⊗ B] = E[B · B*] = (1/N) B · B*, where E is the expected value operator, ⊗ is the outer product operator, and * is the conjugate transpose operator. (The outer-product notation is admittedly a little loose here: outer products apply to vectors, but the covariance matrix in PCA is a sum of outer products of its sample vectors, which is exactly what the product B · B* computes.)

Find the eigenvectors and eigenvalues of the covariance matrix. Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C: V^{−1} C V = D, where D is the diagonal matrix of eigenvalues of C. This step will typically involve a computer-based algorithm for computing eigenvectors and eigenvalues; such algorithms are readily available as sub-components of most matrix algebra systems, such as MATLAB [7][8], Mathematica [9], SciPy, IDL (Interactive Data Language), and GNU Octave, as well as OpenCV. Matrix D will take the form of an M × M diagonal matrix whose mth diagonal element is the mth eigenvalue of the covariance matrix C. Matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the mth eigenvalue corresponds to the mth eigenvector.

Rearrange the eigenvectors and eigenvalues. Sort the columns of the eigenvector matrix V and the eigenvalue matrix D in order of decreasing eigenvalue, making sure to maintain the correct pairings between the columns of each matrix.
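The steps above, together with the cumulative-energy selection and projection steps described next, can be written compactly in MATLAB. This is a sketch under stated assumptions: random data stand in for real observations, the real-valued sample-covariance normalization 1/(N − 1) is used in place of the expectation operator, and 90 percent is the example energy threshold mentioned in the text.

% Covariance-method PCA on an M x N data matrix (columns are observations).
M = 10; N = 200;
X = randn(M, N);                     % stand-in data: M variables, N observations

u = mean(X, 2);                      % empirical mean along each dimension
B = X - u;                           % deviations from the mean (B = X - u*h)
C = (B * B') / (N - 1);              % empirical covariance matrix (M x M)

[V, D]     = eig(C);                 % eigenvectors and eigenvalues of C
[lam, ord] = sort(diag(D), 'descend');
V          = V(:, ord);              % keep eigenvector/eigenvalue pairing

g = cumsum(lam);                     % cumulative energy content
L = find(g / g(end) >= 0.90, 1);     % smallest L reaching the 90% threshold
W = V(:, 1:L);                       % retained basis vectors

Y = W' * B;                          % project centered data onto the new basis

The optional z-score step is omitted here, since, as the text notes below, it is not an integral part of PCA/KLT.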
Compute the cumulative energy content for each eigenvector. The eigenvalues represent the distribution of the source data's energy among the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m: g_m = Σ_{q=1}^{m} D[q, q], for m = 1, ..., M.

Select a subset of the eigenvectors as basis vectors. Save the first L columns of V as the M × L matrix W. Use the vector g as a guide in choosing an appropriate value for L: the goal is to choose L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy is above a certain threshold, like 90 percent; in that case, choose the smallest value of L such that g_L / g_M ≥ 0.9.

Convert the source data to z-scores. Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C, and calculate the M × N z-score matrix Z = B ./ (s · h) (element-by-element division). Note: while this step is useful for various applications, since it normalizes the data set with respect to its variance, it is not an integral part of PCA/KLT.

Project the z-scores of the data onto the new basis. The projected vectors are the columns of the matrix Y = W* · Z, where W* is the conjugate transpose of the eigenvector matrix. The columns of matrix Y represent the Karhunen-Loève transforms (KLT) of the data vectors in the columns of matrix X.

2.2.4 PCA Derivation
Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find a d × d orthonormal transformation matrix P such that Y = PX, with the constraints that cov(Y) is a diagonal matrix and P^{−1} = P^T. By substitution and matrix algebra, we obtain cov(Y) = P cov(X) P^T, and hence P^T cov(Y) = cov(X) P^T. Rewriting P^T as d column vectors P_1, ..., P_d and cov(Y) as the diagonal matrix diag(λ_1, ..., λ_d), and substituting into the equation above, we obtain λ_i P_i = cov(X) P_i. Notice that P_i is then an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.

CHAPTER 3 DISCRETE COSINE TRANSFORM

3.1 Introduction
A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply "the DCT"; its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT".
Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data.

3.2 DCT forms
Formally, the discrete cosine transform is a linear, invertible function F: R^N -> R^N, or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, ..., x_{N−1} are transformed into the N real numbers X_0, ..., X_{N−1} according to one of the following formulas.

DCT-I
X_k = (1/2)(x_0 + (−1)^k x_{N−1}) + Σ_{n=1}^{N−2} x_n cos[π n k / (N−1)], k = 0, ..., N−1.
Some authors further multiply the x_0 and x_{N−1} terms by √2, and correspondingly multiply the X_0 and X_{N−1} terms by 1/√2. This makes the DCT-I matrix orthogonal, if one further multiplies by an overall scale factor of √(2/(N−1)), but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent to a DFT of 2N − 2 real numbers with even symmetry. For example, a DCT-I of N = 5 real numbers abcde is exactly equivalent to a DFT of the eight real numbers abcdedcb, divided by two. Note, however, that the DCT-I is not defined for N less than 2. Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N−1; similarly for X_k.

DCT-II
X_k = Σ_{n=0}^{N−1} x_n cos[(π/N)(n + 1/2) k], k = 0, ..., N−1.
The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT". This transform is exactly equivalent to a DFT of 4N real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, and y_{4N−n} = y_n for 0 < n < 2N. Some authors further multiply the X_0 term by 1/√2 and multiply the resulting matrix by an overall scale factor of √(2/N). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. The DCT-II implies the boundary conditions: x_n is even around n = −1/2 and even around n = N − 1/2; X_k is even around k = 0 and odd around k = N.

DCT-III
X_k = (1/2) x_0 + Σ_{n=1}^{N−1} x_n cos[(π/N) n (k + 1/2)], k = 0, ..., N−1.
Because it is the inverse of the DCT-II (up to a scale factor; see below), this form is sometimes simply referred to as "the inverse DCT" (IDCT). Some authors further multiply the x_0 term by √2 and multiply the resulting matrix by an overall scale factor of √(2/N), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = −1/2 and even around k = N − 1/2.

DCT-IV
X_k = Σ_{n=0}^{N−1} x_n cos[(π/N)(n + 1/2)(k + 1/2)], k = 0, ..., N−1.
The DCT-IV matrix becomes orthogonal if one further multiplies by an overall scale factor of √(2/N). A variant of the DCT-IV, in which data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT) (Malvar, 1992). The DCT-IV implies the boundary conditions: x_n is even around n = −1/2 and odd around n = N − 1/2; similarly for X_k.

DCT V-VIII
DCT types I-IV are equivalent to real-even DFTs of even order, since the corresponding DFT is of length 2(N−1) (for DCT-I), 4N (for DCT-II/III), or 8N (for DCT-IV). In principle, there are actually four additional types of discrete cosine transform, corresponding essentially to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments.
Equivalently, DCTs of types I-IV imply boundaries that are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g., the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.

Inverse transforms
Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N−1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa. As with the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal.

Multidimensional DCTs
Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2-D DCT-II is given by the formula (omitting normalization and other scale factors, as above):

X_{k1,k2} = Σ_{n1=0}^{N1−1} Σ_{n2=0}^{N2−1} x_{n1,n2} cos[(π/N1)(n1 + 1/2) k1] cos[(π/N2)(n2 + 1/2) k2].

Technically, computing a two- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, other methods exist to compute the same thing while performing the computations in a different order. The inverse of a multidimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs, e.g., the one-dimensional inverses applied along one dimension at a time in a row-column algorithm. (The accompanying figure, "Two-dimensional DCT frequencies", shows the combination of horizontal and vertical frequencies for an 8 × 8 (N1 = N2 = 8) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by half a cycle: moving right one square from the top-left yields a half-cycle increase in the horizontal frequency, another move to the right yields two half-cycles, and a move down yields two half-cycles horizontally and a half-cycle vertically. The source 8 × 8 data are transformed into a linear combination of these 64 frequency squares.)

CHAPTER 4 IMPLEMENTATION AND RESULTS

4.1 Introduction
In the previous chapters (Chapters 2 and 3), we developed the theoretical background of Principal Component Analysis and the Discrete Cosine Transform; in this thesis work we analyze both transforms. To execute these tasks we chose the MATLAB platform (the name stands for "matrix laboratory"), an efficient language for digital image processing. The Image Processing Toolbox in MATLAB is a collection of MATLAB functions that extend the capability of the MATLAB environment for the solution of digital image processing problems [13].
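Before turning to the practical analysis in the next section, the DCT definitions from Chapter 3 can be sanity-checked in MATLAB. This is a minimal sketch: the direct loop implements the unnormalized DCT-II formula given above and needs no toolbox, while dct assumes the Signal Processing Toolbox and dct2 the Image Processing Toolbox.

% 1-D check: DCT-II straight from the formula versus MATLAB's dct.
N = 8;
x = randn(N, 1);
X2 = zeros(N, 1);
for k = 0:N-1
    X2(k+1) = sum(x .* cos(pi/N * ((0:N-1)' + 0.5) * k));   % unnormalized DCT-II
end
% MATLAB's dct uses the orthogonal scaling, so rescale before comparing:
s = [sqrt(1/(4*N)); repmat(sqrt(1/(2*N)), N-1, 1)];
assert(norm(2 * s .* X2 - dct(x)) < 1e-10);

% 2-D check: the row-column algorithm (1-D DCT along columns, then along
% rows) reproduces the separable 2-D DCT computed by dct2.
A = randn(8, 8);
assert(norm(dct(dct(A).').' - dct2(A), 'fro') < 1e-10);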
4.2 Practical implementation of the performance analysis
As discussed earlier, we perform the analysis of the two transform methods as applied to the test images.
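As an indication of the kind of error comparison the abstract describes, one might reconstruct an image from a fixed budget of PCA coefficients and from the same budget of low-frequency DCT coefficients, then compare the mean squared errors. The sketch below is purely illustrative and not the thesis's actual procedure: the test image (cameraman.tif, which ships with the Image Processing Toolbox) and the coefficient budget are assumptions.

I = im2double(imread('cameraman.tif'));   % assumed example image
k = 32;                                   % assumed coefficient budget

% PCA route: keep the first k principal components of the column ensemble.
mu = mean(I, 2);
B  = I - mu;
[U, S, V] = svd(B, 'econ');
Ipca = U(:,1:k) * S(1:k,1:k) * V(:,1:k)' + mu;

% DCT route: keep only the k x k low-frequency block of the 2-D DCT.
D  = dct2(I);
Dk = zeros(size(D));
Dk(1:k, 1:k) = D(1:k, 1:k);
Idct = idct2(Dk);

fprintf('PCA MSE: %.3e   DCT MSE: %.3e\n', ...
        mean((I(:) - Ipca(:)).^2), mean((I(:) - Idct(:)).^2));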

Friday, September 20, 2019

Perceptions of Catholic School Teachers

Perceptions of Catholic School Teachers

PART ONE: RESEARCH DESIGN

A title for your study
A mixed methods study of student perceptions of the qualities of highly effective traditional Catholic secondary school teachers.

A description of the research context/background/problem leading to your study (three to five sentences approximately)
It is a well-established and researched fact that, individually and collectively, teachers play a pivotal role in facilitating student success (Duncan, Gurria, & Leeuwen, 2011; Hattie, 2009; Reddy, Fabiano, & Jimerson, 2013). Moreover, there is much evidence to suggest that the level of teacher quality is linked with the level of student performance (Darling-Hammond & Youngs, 2002; Goe & Stickler, 2008; Hattie, 2003; Leigh, 2010). Although recent research on teacher quality has identified general traits (Canter, 2014; Coles, Owens, Serrano, Slavec, & Evans, 2015; Cooper, Hirn, & Scott, 2015; Spilt, Hughes, Wu, & Kwok, 2012), Jimerson and Haddock (2015) advocate further research regarding teacher effectiveness (p. 488). Consequently, this study proposes looking into teacher effectiveness within the sphere of traditional Catholicism, an area which is gradually burgeoning (Society of Saint Pius X, 2016; Wikipedia, 2017b) and hitherto unexplored. Additionally, as Sutcliffe (2011) suggests, it is important to examine high school students' perceptions of teacher quality (p. 24).

Describe your study aims
The aim of this mixed methods exploratory sequential study is to explore and examine student perceptions of traditional Catholic secondary school teacher effectiveness in order to identify, and eventually arrive at, a list of the top ten characteristics for this group. Findings will be used to inform future teacher education programmes which have a specific emphasis on training traditional Catholic teachers. The nature of the exploratory sequential design is such that the study will be divided into two stages (Onwuegbuzie & Collins, 2007). The aim of the first stage is to collect qualitative data from select samples of students in order to identify general themes/characteristics using an inductive approach to data analysis (Schulz, 2012). The aim of the second stage is to narrow these down to a top ten list of characteristics via a quantitative survey which will ask students to choose from a list of key characteristics and then rank the top ten. The results would then be published.

What are your qualitative research question(s)?
What elements set a highly effective traditional Catholic secondary school teacher apart from others? What characteristics do they have? What teaching methods do they use? What qualities/virtues do they possess? What behaviours do they exhibit? What spiritual qualities do they have? How do highly effective Catholic secondary school teachers promote learning and engagement?

What are your quantitative research question(s)?
Of all the characteristics identified, which are considered the most appealing?

What are your mixed methods research question(s)?
Which characteristics of highly effective traditional Catholic teachers are considered most appealing, and to what extent?

Describe the qualitative data that you will collect and how you will analyse it. What methods will you use? Discuss some of the key aspects of the data that you wish to collect (e.g. interview questions, what you will look for in observations etc.). How many participants, and how will you choose them (sampling)? (approximately 300 words)
I intend taking a focus group approach to the collection of qualitative data. Focus groups allow participants to interact with one another, share and compare, and often generate new insights beyond what individual interviews can produce (Carey, 2016; Pedersen et al., 2016). Furthermore, given that the participants will be secondary-age students, I thought they would feel more comfortable with each other than in an individual interview alone with an adult. At the same time, focus group interviews give the researcher an opportunity to hear the language of the participants and explore the topic in more depth (Pedersen et al., 2016). In regard to the number of participants, I intend using what Onwuegbuzie and Collins (2007) call cluster sampling: that is, selecting focus groups consisting of 10 students from each of 10 different traditional Catholic schools, so that qualitative data are collected from a total of 100 participants. It was thought advisable to select clusters of senior-age students, that is, students in their last year of school only, for two reasons: firstly, older students have years of experience behind them and are able to reflect on those experiences; secondly, older students are more mature. In regard to key aspects of the data, it is hoped that semi-structured questions such as "Reflecting on your own personal learning experiences, what examples of effective Catholic teaching have you seen?" and "What makes a traditional Catholic teacher great?" would elicit responses from the students which the researcher could then explore in more depth. Data would be collected via a voice recorder, and a conversation analysis would then be undertaken from the recording. This involves constructing a transcript of the interviews and then analysing the data using an inductive approach, that is, finding themes/characteristics and recurring themes/characteristics which have emerged from the data (Schulz, 2012; Wikipedia, 2017a). These popular themes could then be used to develop a quantitative instrument for the next stage of the research project. A few factors would also need to be considered during data collection and analysis. For example, the schools, parents, and students would have to give informed and voluntary consent, and anonymity and confidentiality with regard to the names of teachers and students, including the names of the schools, would have to be respected (Tolich & Davidson, 1999).

Describe the quantitative data that you will collect and how you will analyse it. What methods will you use? Discuss some of the key variables that you will collect data on. How many participants, and how will you choose them (sampling)? (approximately 300 words)

I intend collecting quantitative data from a survey which would be sent to the same students who had participated in the first stage of this research project, that is to say, those senior-age secondary school students who had participated in the focus groups. The survey would consist of a list of variables, namely the results of the qualitative conversation analysis: the key characteristics of highly effective Catholic school teachers. To cite an example, one such variable might be the proposition that highly effective Catholic secondary school teachers prepare well for their lessons. Others might be that highly effective Catholic secondary school teachers have positive attitudes, that they are masters of their subject areas, or that they include a spiritual dimension in their teaching.
I consider it advisable to add one other variable, school location, so the researcher can identify how many surveys from each school were returned. Other nominal demographic data such as gender and age were considered unnecessary for this particular research project, because the focus of the project is on a list of the top ten characteristics of effective Catholic secondary school teachers and not on gender or age differences. In regard to the administration of the survey itself, students would be invited to read the list of variables and choose a top ten. Students would be asked to rank the characteristic they felt most defined an effective Catholic secondary school teacher as number one, and so on until they reach number ten. Results would be entered into a Microsoft Excel spreadsheet, and the top ten key characteristics would be determined by working out which characteristics had the ten lowest total scores.

Drawing on the work of Creswell and Plano Clark (2011), draw a diagram of your mixed methods study.

PART TWO: CRITICAL REVIEW

Using Onwuegbuzie and Poth's meta-themes, conduct a critical review of the following mixed methods article (500-750 words): Wyant, J. D., Jones, E. M., & Bulger, S. M. (2015). A mixed methods analysis of a single-course strategy to integrate technology into PETE. Journal of Teaching in Physical Education, 34, 131-151.

A critical review has been conducted on Wyant, Jones, and Bulger's (2015) above-mentioned article using Onwuegbuzie and Poth's (2016) meta-themes. Unless otherwise noted, all references are citations from this article.

META-THEME #1 WARRANTEDNESS
Many terms were introduced and defined; however, I assess that there were too many. In the introduction alone, readers were introduced to the terms PETE, NASPE, ISTE, CBAM, Stages of Adoption, External Barriers, First-Order Barriers, and TPACK. Whole articles have been devoted to explaining TPACK! Additionally, what are "occupational socialization researchers"? The reference list was comprehensive; however, there were quite a few errors: for example, "throug technology" instead of "through technology"; the use of capitals in some of the titles ("Teachers Use of Educational Technology in U.S. Public Schools: 2009 (NCES 2010-040)"); US states missing ("Eugene:" instead of "Eugene, OR"); the words "Author Retrieved" instead of just "Retrieved"; and many recent journal articles missing DOI numbers. In regard to the latter, was this the fault of the authors, or were the DOIs simply unable to be found? Citations were also inconsistent within the article; for example, three authors were consolidated to (Hall et al., 1979) and (Rochanasmita et al., 2009), while others with four were written out in full (Allan, Erickson, Brookhouse, & Johnson, 2010; Russell, Bebell, O'Dwyer, & O'Connor, 2003).

META-THEME #2 JUSTIFICATION
The article appears to be underdeveloped. Although the authors used the TPACK model as a theoretical framework to explain the first theme, nowhere in the article do they explain this; they refer readers to the TPACK model in the introduction, but that is all. The same could be said for First-Order Barriers, which is used as a framework for the second theme. The authors advance qualitative, quantitative, and mixed methods research: they used a mixed methods design with quantitative and qualitative data collection procedures consisting of closed-ended survey instruments, weekly journal entries, and end-of-course semi-structured interviews.
The purpose of the study was clearly stated in the purpose statement paragraph: "The purpose of this study was to examine the influence of a domain-specific technology course…"

META-THEME #3 WRITING QUALITY
Although the authors made good use of headings and sub-headings, some sections seemed out of order. For example, Research Design followed Methods when it really should be the other way around; participants were mentioned twice; and the introduction seemed too long. Furthermore, the authors did not follow the structure recommended by Onwuegbuzie and Poth (2016), who advocate the following framework: literature review, theoretical framework, rationale, purpose statement, research questions, hypothesis, and educational significance. The authors used a number of appropriate transitional words such as "Accordingly", "More specifically", "For example", "Typically", and "Further". However, the abstract repeats the purpose statement.

META-THEME #4 TRANSPARENCY
Information relating to sample size was mentioned specifically and clearly. Readers were directed to the fact that participants included 12 pre-service teachers recruited from a sample of 34 (male = 24, female = 10). However, information regarding the two participant sampling techniques could have been made clearer. For example, the authors could have clearly stated that the first participant sampling technique used was…, the second participant sampling technique was…, etc. All tables and figures were referred to, for example, "see Figure 1", "see Figure 2", "see Table 1", etc. However, in my opinion they were not explained sufficiently. To cite just one example, Figure 1 was a visual model of the mixed methods design; however, the authors only referred readers to this visual diagram under Data Collection Procedures and Analysis and not under Research Design, where it would seem more appropriate. Furthermore, readers were referred to the model as a timeline ("a timeline was created … (see Figure 1)"), yet the caption clearly indicated that it was a "Visual Model of Mixed Methods Research Design". Directions for future research were provided in the final concluding paragraph: "What this study further highlights is the need for scholars to devote greater attention to the research and dissemination of technology-related projects."

META-THEME #5 INTEGRATION
Figure 1, which consisted of a visual model of the mixed methods research design, was clear and visually appealing. However, as said above, it should have been referred to and explained in detail under the heading Research Design. Measures were taken to ensure validity/legitimation, for example, the use of experienced social science researchers (plural) to review and legitimize research procedures; interview transcriptions and journal entries were also distributed to participants to confirm that the data accurately captured their feelings.

META-THEME #6 PHILOSOPHICAL LENS
The authors refer to research philosophies such as occupational socialization and constructivist-based learning; however, both could have been presented in a better way. For example, in regard to the former, the authors refer the readers to "occupational socialization researchers", yet they do not clearly define this but simply refer readers to it at the very end of the paragraph ("As with occupational socialization, research on learning…"). Shouldn't philosophies be clearly defined first so readers understand?

REFERENCES
Canter, L. (2014).
Classroom management for academic success. Bloomington, IN: Solution Tree Press.
Carey, M. A. (2016). Focus groups: What is the same, what is new, what is next? Qualitative Health Research, 26(6), 731-733. doi.org/10.1177/1049732316636848
Coles, E. K., Owens, J. S., Serrano, V. J., Slavec, J., & Evans, S. W. (2015). From consultation to student outcomes: The role of teacher knowledge, skills, and beliefs in increasing integrity in classroom management strategies. School Mental Health, 7(1), 34-48. doi.org/10.1007/s12310-015-9143-2
Cooper, J. T., Hirn, R. G., & Scott, T. M. (2015). Teacher as change agent: Considering instructional practice to prevent student failure. Preventing School Failure: Alternative Education for Children and Youth, 59(1), 1-4. doi.org/10.1080/1045988X.2014.919135
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Los Angeles, CA: SAGE Publications.
Darling-Hammond, L., & Youngs, P. (2002). Defining "highly qualified teachers": What does "scientifically-based research" actually tell us? Educational Researcher, 31(9), 13-25. doi.org/10.3102/0013189X031009013
Duncan, A., Gurria, A., & Leeuwen, F. (2011). Uncommon wisdom on teaching. Retrieved from www.huffingpost.com/arne-duncan/uncommon-wisdom-on-teachi_b_836541.html
Goe, L., & Stickler, L. M. (2008). Teacher quality and student achievement: Making the most of recent research. National Comprehensive Center for Teacher Quality. Retrieved from http://files.eric.ed.gov/fulltext/ED520769.pdf
Hattie, J. (2003). Teachers make a difference: What is the research evidence? Paper presented at the Australian Council for Educational Research Annual Conference on Building Teacher Quality, Melbourne, Australia. Retrieved from http://research.acer.edu.au/cgi/viewcontent.cgi?article=1003&context=research_conference_2003
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, England: Routledge.
Jimerson, S. R., & Haddock, A. D. (2015). Understanding the importance of teachers in facilitating student success: Contemporary science, practice, and policy. School Psychology Quarterly, 30(4), 488-493. doi.org/10.1037/spq0000134
Leigh, A. (2010). Estimating teacher effectiveness from two-year changes in students' test scores. Economics of Education Review, 29(3), 480-488. doi.org/10.1016/j.econedurev.2009.10.010
Onwuegbuzie, A. J., & Collins, K. M. T. (2007). A typology of mixed methods sampling designs in social science research. The Qualitative Report, 12(2), 281-316.
Pedersen, B., Delmar, C., Falkmer, U., & Grønkjaer, M. (2016). Bridging the gap between interviewer and interviewee: Developing an interview guide for individual interviews by means of a focus group. Scandinavian Journal of Caring Sciences, 30(3), 631-638. doi.org/10.1111/scs.12280
Reddy, L. A., Fabiano, G. A., & Jimerson, S. R. (2013). Assessment of general education teachers' Tier 1 classroom practices: Contemporary science, practice, and policy. School Psychology Quarterly, 28(4), 273-276. doi.org/10.1037/spq0000047
Schulz, J. (2012). Analysing your interviews [Southampton Education School] [Video file]. Retrieved from https://www.youtube.com/watch?v=59GsjhPolPs
Society of Saint Pius X. (2016). General statistics about the SSPX. Retrieved from http://sspx.org/en/general-statistics-about-sspx
Spilt, J. L., Hughes, J. N., Wu, J.-Y., & Kwok, O.-M. (2012).
Dynamics of teacher-student relationships: Stability and change across elementary school and the influence on children's academic success. Child Development, 83(4), 1180-1195. doi.org/10.1111/j.1467-8624.2012.01761.x
Sutcliffe, C. P. (2011). Secondary students' perceptions of teacher quality. Electronic Theses & Dissertations, 391. Retrieved from www.digitalcommons.georgiasouthern.edu/etd/391
Tolich, M., & Davidson, C. (1999). Starting fieldwork: An introduction to qualitative research in New Zealand. Auckland, New Zealand: Oxford University Press.
Wikipedia. (2017a). Conversation analysis. In Wikipedia, the free encyclopaedia. Retrieved from https://en.wikipedia.org/wiki/Conversation_analysis
Wikipedia. (2017b). Traditionalist Catholic. In Wikipedia, the free encyclopaedia. Retrieved from https://en.wikipedia.org/wiki/Traditionalist_Catholic
Wyant, J. D., Jones, E. M., & Bulger, S. M. (2015). A mixed methods analysis of a single-course strategy to integrate technology into PETE. Journal of Teaching in Physical Education, 34(1), 131-151. doi.org/10.1123/jtpe.2013-0114