
Types of Tests Used in English Language Teaching (Bachelor Paper)

learnt by tremendous effort, rather than a demonstration of the student's actual knowledge (which, unfortunately, may not exist at all). Moreover, there can be an even more disastrous case in which the student has cheated and copied a neighbour's work. Apart from the above, other factors can lead to an inadequate completion of the test (a sleepless night, personal or health problems, etc.).

However, very often the test itself provokes the students' failure to complete it. With reference to linguists such as Hughes (1989) and Alderson (1996), we can state that there are two main causes of a test being inaccurate:

- test content and techniques;
- lack of reliability.

The first cause means that the test's design should correspond to what is being tested. First, the test must contain the exact material that is to be tested. Second, the activities, or techniques, used in the test should be adequate and relevant to what is being tested. This means they should not frustrate the learners but, on the contrary, help the students complete the test successfully.

The second cause means that one and the same test given at different times must yield the same scores. The results should not differ merely because of a shift in time. For example, a test cannot be called reliable if the scores obtained the first time the students complete it differ from those obtained when it is administered a second time, even though the learners' knowledge has not changed at all. Furthermore, reliability can suffer because of the improper design of a test (unclear instructions and questions, etc.) and because of the way it is scored. The teacher may evaluate different students differently, taking various aspects into consideration (the level of the students, participation, effort, and even personal preferences). If there are two markers, there will almost certainly be two different evaluations, for each marker will apply his/her own criteria when marking and evaluating one and the same work. Take testing speaking skills as an example: one marker will probably treat grammar as the most significant point to be evaluated, whereas the other will place more emphasis on fluency. Sometimes this can lead to arguments between the markers; nevertheless, we should never forget that the main figure we have to deal with is still the student.

2.2. Validity

Now we can come to one of the most important aspects of testing – validity. According to Hughes, every test should be reliable as well as valid; both notions are crucial elements of testing. However, according to Moss (1994), there can be validity without reliability, and sometimes the border between these two notions simply blurs. Apart from these elements, a good test should be efficient as well.

According to Bynom (Forum, 2001), validity deals with what is tested and the degree to which a test measures what it is supposed to measure (Longman Dictionary, LTAL). For example, if we test the students' writing skills by giving them a composition on Ways of Cooking, we cannot call such a test valid, for it can be argued that it tests not their ability to write but their knowledge of cooking as a skill. It is certainly very difficult to design a test with good validity; therefore, the author of the paper believes that it is essential for the teacher to know and understand what validity really is.

According to Weir (1990:22), there are five types of validity:

- construct validity;
- content validity;
- face validity;
- washback validity;
- criterion-related validity.

Weir (ibid.) states that construct validity is a theoretical concept that involves the other types of validity. Further, quoting Cronbach (1971), Weir writes that to construct or plan a test one should research the testee's behaviour and mental organisation. It is the ground on which the test is based; it is the starting point for constructing the test tasks. In addition, Weir presents Kelly's idea (1978) that test design requires some theory, even if the exposure to it is only indirect. Moreover, if we are able to define the theoretical construct at the beginning of the test design, we will be able to use it when dealing with the results of the test. The author of the paper assumes that a test appropriately constructed at the beginning will not cause any difficulties in its administration and scoring later.

Another type of validity is content validity. Weir (ibid.) implies that content validity and construct validity are closely bound and sometimes even overlap. Speaking about content validity, we should emphasise that it is an inevitable element of a good test. What is meant is that the duration of the classes or the test time is usually rather limited, and if we teach a rather broad topic such as "computers", we cannot design a test that would cover all aspects of that topic. Therefore, to check the students' knowledge we have to choose from what was taught, whether specific vocabulary or various texts connected with the topic, for it is impossible to test the whole material. The teacher should not pick out tricky items that either were mentioned only once or were not discussed in the classroom at all, even though they belong to the topic. S/he should not forget that the test is not a punishment or an opportunity for the teacher to show the students that they are less clever. Hence, we can state that content validity is closely connected with the definite items that were taught and are supposed to be tested.

Face validity, according to Weir (ibid.), is not a matter of theory or sampling design. It is how the examinees and the administrative staff see the test: whether it is construct and content valid or not. This will certainly include debates and discussions about a test; it will involve the teachers' cooperation and the exchange of their ideas and experience.

Another type of validity to be discussed is washback validity, or backwash. According to Hughes (1989:1), backwash is the effect of testing on the teaching and learning process. It can be both negative and positive. Hughes believes that if the test is considered a significant element, then preparation for it will occupy most of the time and other teaching and learning activities will be ignored. As far as the author of the paper is concerned, this is already a habitual situation in the schools of our country, for our teachers are faced with the centralised exams and everything they have to do is prepare their students for them. Thus, the teacher starts concentrating purely on the material that could be encountered in the exam papers, alluding to examples taken from past exams. Therefore, numerous interesting activities are left behind; the teachers are concerned only with the result and forget about the different techniques that could be introduced and later used by their students to make dealing with the exam tasks easier, such as guessing from the context, applying schemata, etc.

The problem arises when the objectives of the course taught during the study year differ from the objectives of the test. As a result, we get a negative backwash: for example, the students were taught to write a review of a film, but during the test they are asked to write a letter of complaint, which the teacher has neither planned for nor taught.

Often a negative backwash may be caused by inappropriate test design. Later in his book, Hughes speaks about multiple-choice activities that are designed to check the students' writing skills. The author of the paper is very confused by that, for it is hard to imagine how the writing of an essay could be tested with the help of multiple choice. When testing an essay, the teacher is first of all interested in the students' ability to put their ideas into writing, how it has been done, what language has been used, whether the ideas are supported and discussed, etc. For this purpose the multiple-choice technique is highly inappropriate.

Notwithstanding, according to Hughes, apart from the negative side of backwash there is positive backwash as well. It could be the creation of an entirely new course designed especially to help the students pass their final exams. A test given in the form of final exams forces the teacher to reorganise the course and choose appropriate books and activities to achieve the set goal: passing the exam. Further, he emphasises the importance of partnership between teaching and testing. Teaching should meet the needs of testing; that is, teaching should correspond to the demands of the test. However, this is rather complicated work, for, to the author's knowledge, the teachers in our schools are not supplied with specially designed materials that could assist them in preparing the students for the exams. The teachers are given only vague instructions and are free to act on their own.

The last type to be discussed is criterion-related validity. Weir (1990:22) assumes that it concerns the link between test scores and two different performances of the same test: either an older established test or a future criterion performance. The author of the paper considers that this type of validity is closely connected with the criteria and evaluation the teacher uses to assess the test. It could mean that the teacher has to work out a definite evaluation system and, moreover, should explain what s/he finds important and worth evaluating and why. Usually the teachers design their own system; often these are points that the students can obtain by fulfilling a certain task. Later the points are added up and converted into the mark to be awarded. Furthermore, the teacher can have a special table matching points to the relevant marks, as sketched below. According to our knowledge, the language teachers decide on the criteria together during a special meeting devoted to that topic, and later they keep to them for the whole study year. Moreover, the teachers are supposed to acquaint their students with the evaluation system so that the students are aware of what they are expected to do.
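As an illustration only, the following minimal sketch shows how such a points-to-mark table might be expressed. The percentage thresholds and the ten-point mark scale are hypothetical assumptions for the example, not taken from any of the cited sources.

# A minimal sketch of a points-to-mark conversion table.
# The thresholds and the ten-point mark scale are hypothetical examples.
POINTS_TO_MARK = [
    (90, 10),  # 90% of the maximum points or more -> mark 10
    (80, 9),
    (70, 8),
    (60, 7),
    (50, 6),
    (40, 5),
    (0, 4),    # anything below 40% -> mark 4
]

def convert_to_mark(points: int, max_points: int) -> int:
    """Return the mark that corresponds to the share of points obtained."""
    percentage = 100 * points / max_points
    for threshold, mark in POINTS_TO_MARK:
        if percentage >= threshold:
            return mark
    return POINTS_TO_MARK[-1][1]

# Example: 34 points out of a possible 40 is 85%, which gives mark 9.
print(convert_to_mark(34, 40))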

2.3. Reliability

According to Bynom (Forum, 2001), reliability means that the test's results will be similar and will not change if one and the same test is given on different days. The author of the paper agrees with Bynom and considers reliability one of the key elements of a good test in general. For, as has already been discussed, the essence of reliability is that the students' scores for one and the same test, though given at different times and with a rather extended interval, will be approximately the same. This will not only show that the test is well organised, but will also indicate that the students have acquired the new material well.
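Purely as an illustration of this idea, the sketch below compares two administrations of the same test by computing the correlation between the two sets of scores. The scores are invented, and a correlation-based check is only one possible way of estimating test-retest reliability.

# A minimal sketch of a test-retest reliability check.
# The scores are invented; in practice they would come from two
# administrations of the same test to the same group of students.
from statistics import correlation  # available in Python 3.10+

first_sitting = [34, 28, 40, 22, 31, 37, 25, 30]
second_sitting = [33, 29, 39, 24, 30, 36, 27, 31]

# A correlation close to 1.0 suggests the ranking of the students is
# stable across the two sittings, which is one indicator of reliability.
r = correlation(first_sitting, second_sitting)
print(f"Test-retest correlation: {r:.2f}")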

A reliable test, according to Bynom, will contain well-formulated tasks rather than vague questions; the student will know exactly what should be done. The test will always present ready examples at the beginning of each task to clarify what is required. The students will not be frustrated and will know exactly what they are asked to perform. However, judging from personal experience, the author of the paper has to admit that even such hints may confuse the students; they may fail to understand the requirements and, consequently, fail to complete the task correctly. This could be explained by the fact that the students are very often inattentive, lack patience and try to finish the test quickly without bothering to double-check it.

Further, with reference to Heaton (1990:13), who states that a test can be unreliable if it is marked by two different markers, we can add that this factor should be taken into account as well. For example, one representative of the marking team could be rather lenient and have particular demands and requirements, while the other could turn out to be too strict and pay attention to every detail. Thus, we come to another important factor influencing reliability: the markers' comparison of the examinees' answers. Moreover, we have to admit a rather sad but not exceptional fact: the marker's personal attitude towards the testee can affect his/her evaluation. Nor can we exclude the various home or health problems the marker may be facing at that moment.
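To illustrate the point about two markers, the following sketch, using invented marks for ten test tasks, computes the simple percentage of tasks on which the two markers agree. This is only one rough way of quantifying inter-marker consistency.

# A minimal sketch of checking agreement between two markers.
# The marks are invented examples for ten test tasks.
marker_a = [5, 4, 5, 3, 4, 5, 2, 4, 3, 5]
marker_b = [5, 4, 4, 3, 4, 5, 3, 4, 3, 5]

agreed = sum(1 for a, b in zip(marker_a, marker_b) if a == b)
agreement = 100 * agreed / len(marker_a)

# A low percentage would suggest the markers apply different criteria.
print(f"Markers agree on {agreement:.0f}% of the tasks")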

To summarise, we can say that possessing validity and reliability is not enough for a good test. The test should also be practical or, in other words, efficient. It should be easily understood by the examinee, easy to score and administer and, certainly, rather cheap. It should not last for an eternity, for both examiner and examinee would become tired during a five-hour non-stop testing process. Moreover, when testing the students, the teachers should be aware of the fact that, together with checking their knowledge, the test can influence the students negatively. Therefore, the teachers ought to design a test that encourages the students rather than makes them doubt their own abilities. The test should be a friend, not an enemy. Thus, the issue of validity and reliability is essential in creating a good test. The test should measure what it is supposed to measure, not knowledge beyond the students' abilities. Moreover, the test will be a true indicator of whether the learning process and the teacher's work are effective.

Chapter 3

Types of tests

Different scholars (Alderson, 1996; Heaton, 1990; Underhill, 1991) ask a similar question in their research: why test, do the teachers really need tests, and for what purpose? Further, they all agree that a test is not the teacher's attempt to catch the students unprepared with material they are not acquainted with; nor is it merely a motivating factor for the students to study. In fact, the test is a request for information and an opportunity to learn what the teachers did not know about their students before. We can add here that the test is important for the students too, though they may be unaware of that. The test is supposed to display not only the students' weak points but also their strong sides. It can act as an indicator of the progress the student is gradually making in learning the language. Moreover, we can cite the idea of Hughes (1989:5), who emphasises that we can check the progress and the general or specific knowledge of the students, etc. This claim leads us directly to the statement that for each of these purposes there is a special type of testing. According to some scholars (Thompson, 2001; Hughes, 1989; Alderson, 1996; Heaton, 1990; Underhill, 1991), there are four traditional categories or types of tests: proficiency tests, achievement tests, diagnostic tests, and placement tests. The author of the paper, having once been a teacher, can claim that she is acquainted with three of them and has frequently used them in her teaching practice.

In the following sub-chapters we intend to discuss the different types of tests and, where possible, to draw on our own experience in using them.

3.1. Diagnostic tests

It is wise to start our discussion with this type of testing, for it is typically the first step each teacher, even a non-language teacher, takes at the beginning of a new school year. In the establishment where the author of the paper worked, it was one of the main rules to start a new study year by giving the students a diagnostic test. Every year the administration of the school drew up a special plan in which every teacher was supposed to write when and how they were going to test their students. Moreover, the teachers were supposed to analyse the diagnostic tests, complete special documents and provide diagrams with the results of each class, or of each group if a class was divided. Then, at the end of the study year, the teachers were required to compare these results with those of the final, achievement test (see Appendix 1). The author of the paper has used this type of test several times, but had never gone deep into the details of how it is constructed, why and what for. Therefore, the facts listed below were of great value for her.

According to the Longman Dictionary of LTAL (106), a diagnostic test is a test that is meant to display what the student knows and what s/he does not know. The dictionary gives the example of testing the learners' pronunciation of English sounds. Moreover, the test can check the students' knowledge before they start a particular course. Hughes (1989:6) adds that diagnostic tests are supposed to spot the students' weak and strong points. Heaton (1990:13) compares this type of test with the diagnosis of a patient, and the teacher with a doctor who states the diagnosis. Underhill (1991:14) adds that a diagnostic test provides the student with a variety of language elements, which will help the teacher to determine what the student knows or does not know. We believe that the teacher will intentionally include material that either is presumed by the syllabus to have been taught or could be a starting point for a course without the knowledge of which further work is not possible. Thus, we fully agree with Heaton's comparison of the test with a patient's diagnosis.

The diagnostic test shows the teacher the state of the students' current knowledge. This is especially essential when the students return from their summer holidays (which produce a rather substantial gap in their knowledge) or when the students start a new course and the teacher is completely unfamiliar with the level of the group. Hence, the teacher has to consider carefully the items s/he is interested in testing. This consideration reflects Heaton's proposal (ibid.), which stipulates that the teachers should be systematic in designing the tasks that are supposed to illustrate the students' abilities, and they should know exactly what they are testing. Moreover, Underhill (ibid.) points out that, apart from the above, the most essential feature of the diagnostic test is that the students should not feel depressed when the test is completed. Therefore, very often the teachers do not give any marks for the diagnostic test and sometimes do not even show the test to the learners unless the students ask for it to be returned. Nevertheless, in our own experience the learners, especially the young ones, are eager to know their results and even demand marks for their work. Notwithstanding, it is up to the teacher whether to inform his/her students of the results or not; in any case, the test provides valuable information mostly for the teacher and his/her plans for designing a syllabus.

Returning to Hughes (ibid.), we can emphasise his belief that this type of test is very useful for an individual check. It means that this test
