
2001; Hughes, 1989) we can posit the idea that direct testing will be task-oriented, effective and easy to manage if it tests such skills as writing or speaking. This can be explained by the fact that the tasks intended to check the skills mentioned above give us precise information about the learners' abilities. Moreover, we can maintain that when testing writing the teacher requires the students to complete a certain task, such as an essay, a composition or a reproduction, and this is precisely what the teacher intends to check. Certain demands will be imposed on a writing test: the teacher might be interested only in the students' ability to produce the right layout of an essay, without taking grammar into account, or, on the contrary, might be more concerned with grammatical and syntactical structures. As for testing speaking skills, here the author of the paper does not support the idea promoted by Bynom that it could be treated as direct testing. Certainly, there will be a particular task that engages the speaking skills; however, speaking is not possible without the employment of listening skills. This in turn means that apart from speaking skills the teacher will also test the students' ability to understand the speech s/he hears, thus involving listening skills.

It is said that the advantage of direct testing is that it targets certain specific abilities, and preparation for it usually involves persistent practice of those skills. Nevertheless, the skills tested are deprived of an authentic situation, which may later cause the students difficulties in using them.

Now we can shift to another notion - indirect testing. It differs from the direct type in that it measures a skill through some other skill. This could mean the incorporation of various skills that are connected with each other, e.g. listening and speaking skills.

According to Hughes, indirect testing tests the use of the language in real-life situations. Moreover, it suits all situations, whereas direct testing is bound to certain tasks intended to check a certain skill. Hughes (ibid.) assumes that indirect testing is more effective than the direct type, for it covers a broader part of the language. This means that the learners are not constrained to one particular skill and a corresponding exercise. They are free to draw on all four skills; what is checked is their ability to operate with those skills and apply them in various, even unpredictable, situations. This is the true indicator of the learner's real knowledge of the language.

Indirect testing has more advantages than disadvantages; the only drawback, according to Hughes, is that this type of testing is difficult to evaluate. It can be frustrating to decide what to check and how to check it: whether grammar should be weighted more heavily than composition structure, or vice versa. The author of the paper agrees with that; however, based again on her experience at school, she must state that it is not so easy to apply indirect testing. It can be rather time-consuming, for it is a well-known fact that the duration of a class is just forty minutes; moreover, it is rather complicated to construct an indirect test - it demands a lot of work, but our teachers are usually overloaded with a variety of other duties. Thus, we can only rely on the course books that supply us with a variety of activities involving the cooperation of all four skills.

4.2 Discrete point and integrative testing

Having discussed the kinds of testing that deal with general aspects, such as certain skills and a variety of skills in cooperation, we can come to more detailed types such as discrete point and integrative testing. According to the Longman Dictionary of LTAL (112), a discrete point test is a language test that is meant to test a particular language item, e.g. tenses. The basis of this type of test is that we can test components of the language (grammar, vocabulary, pronunciation, and spelling) and language skills (listening, reading, speaking, and writing) separately. We can state that the discrete point test is the test most commonly used by the teachers in our schools. Having studied a grammar topic or new vocabulary and practised it a great deal, the teacher basically gives a test based on the covered material. This test usually includes the items that were studied and will never display anything from a far different field. The same concerns the language skills: if the teacher's aim is to check reading skills, the other skills will be neglected. The author of the paper has used such tests herself, especially after a particular grammar topic had been studied. She had to construct the tests herself, basing them on the examples displayed in various grammar books. These were usually gap-filling exercises, multiple-choice items or cloze tests. Sometimes a creative task was offered, where the students had to write a story involving the particular grammar theme being checked. According to her observations, the students who studied hard were able to complete them successfully, though there were cases when students failed. Now, having discussed the theory on validity, reliability and types of testing, it is even more difficult to determine who was really to blame for the test failures: either the tests were wrongly designed or there was a problem in the teaching. Notwithstanding, this type was and still remains the most common and accepted type in the schools of our country, for it is easy to design, it concerns a certain aspect of the language and it is easy to score. If we speak about types of tests, we can say that this way of testing refers more to a progress test (examples of this type of test can be seen in Appendix 2).

Nevertheless, according to Bynom (2001:8), there is a certain drawback to discrete point testing, for it tests only separate parts and does not show us the whole language. This is true if our aim is to incorporate the whole language. However, if we are to check the exact material the students were supposed to learn, then there is no reason not to use it.

Moving on, we come to integrative tests. According to the Longman Dictionary of LTAL, the integrative test is intended to check several language skills and language components together or simultaneously. Hughes (1989:15) stipulates that integrative tests display the learners' knowledge of grammar, vocabulary and spelling together, not as separate skills or items.

Alderson (1996:219) posits that, by and large, most teachers prefer integrative testing to the discrete point type. He explains this by the fact that teachers basically either do not have enough spare time to check a particular item in isolation, or the purpose of the test is only to review the material as a whole. Moreover, some language skills, such as reading, do not require a precise investigation of whether the students can cope with particular fragments of the text or not. We can interpret the prior statements as the idea that teachers are mostly concerned with general language knowledge, not with bits and pieces of it. The separate items are usually not capable of showing the real state of the students' knowledge. As for the author of the paper, she finds integrative testing very useful, though she considers the discrete point test the more habitual one. She assumes that the teacher should incorporate both types of testing for an effective evaluation of the students' true language abilities.

4.3 Criterion-referenced and norm-referenced testing

The next types of testing to be discussed are criterion-referenced and norm-referenced testing. They are not focused directly on the language items, but on the scores the students can get. Again we should consult the Longman Dictionary of LTAL (17), which states that a criterion-referenced test measures the knowledge of the students against set standards or criteria. This means that there will be certain criteria according to which the students will be assessed, and different criteria for different levels of the students' language knowledge. Here the aim of testing is not to compare the results of the students; it is connected with the learners' knowledge of the subject. As Hughes (1989:16) puts it, criterion-referenced tests check the actual language abilities of the students. They distinguish the students' weak and strong points. The students either manage to pass the test or fail it. However, they never feel better or worse than their classmates, for it is their own progress that is focused on and checked. At this point we can mention the centralised exams at the end of the ninth and twelfth forms. As far as the author of the paper is concerned, the results of these exams are credible, and after passing the exams the learners are awarded various levels relevant to their language ability. Apart from that, once a year in Latvian schools the students are given tests designed by the officials of the Ministry of Education to check the level of the students and, what is most important, the work of the teacher. They are called diagnostic tests, though in the light of the material discussed above this label is rather arguable. Nevertheless, we can accept the fact that criterion-referenced testing could be used in the form of diagnostic tests.

Advancing further, we come to the norm-referenced test, which measures the knowledge of a learner and compares it with the knowledge of the other members of his/her group. The learner's score is compared with the scores of the other students. According to Hughes (ibid.), this type of test does not show us what exactly the student knows. Therefore, we presume that the best test format for this type of testing could be a placement test, for it concerns the students' placement and division according to their knowledge of the foreign language. There the score is vital as well.

4.4 Objective and subjective testing

It is worth mentioning that apart from scoring and testing the learners' abilities, another essential role could be attributed to indirect factors that influence evaluation. These are objective and subjective issues in testing. According to Hughes (1989:19), the difference between these two types lies in the way of scoring and in the presence or absence of the examiner's judgement. If no judgement is involved, the test is objective. On the contrary, a subjective test involves the personal judgement of the examiner. The author of the paper understands it as follows: when testing the students objectively, the teacher usually checks just their knowledge of the topic, whereas testing subjectively may involve the teacher's own ideas and judgements. This can be encountered during a speaking test, where the student can make either a positive or a negative impression on the teacher. Moreover, the teacher's impression and his/her knowledge of the students' true abilities can seriously influence the assessment process. For example, a student may have failed the test; however, the teacher knows the true abilities of the student and, therefore, will assess the work of that student differently, taking all the factors into account.

4.5 Communicative language testing

Referring to Bynom (ibid.), this type of testing has been popular since the 1970s-80s. It involves the knowledge of grammar and how it can be applied in written and oral language; the knowledge of when to speak and what to say in an appropriate situation; and the knowledge of verbal and non-verbal communication. All these types of knowledge should be successfully used in a given situation. It is based on the functional use of the language. Moreover, communicative language testing helps the learners feel as if they were in a real-life situation and acquire the relevant language.

Weir (1990:7) stipulates that this type of testing tests precisely the “performance” of communication. Further, he develops the idea of “competence”, due to the fact that an individual usually acts in a variety of situations. Afterwards, reconsidering Bachman's idea, he comes up with another notion - ‘communicative language ability’.

Weir (1990:10-11) assumes that in order to work out a good communicative language test we have to bear in mind the issue of precision: both the skills and the performance should be accurate. Besides, their combination is vital for placing the students in the so-called ‘real-life situation’. However, without a context the communicative language test would not function. The context should be as close to real life as possible. It is required in order to help the student feel as if s/he were in a natural environment. Furthermore, Weir (ibid.) stresses that language ‘fades’ if deprived of context.

Weir (ibid., p.11) says: “to measure language proficiency adequately in each situation, account must be taken of: where, when, how, with whom, and why the language is to be used, and on what topics, and with what effect.” Moreover, Weir (ibid.) emphasises the crucial role of schemata (prior knowledge) in communicative language tests.

The tasks used in communicative language testing should be authentic and ‘direct’, so that the student is able to perform as s/he does in everyday life.

According to Weir (ibid.), the students have to be ready to speak in any situation; they have to be ready to discuss certain topics in groups and to be able to overcome the difficulties met in a natural environment. Therefore, tests of this type are never simplified, but are given as they could be encountered in the surroundings of a native speaker. Moreover, the student has to possess certain communicative skills, that is, how to behave in a certain situation, how to use body language, etc.

Finally, we can repeat that communicative language testing involves the learner's ability to operate with the language s/he knows and to apply it to the particular situation in which s/he is placed. S/he should be capable of behaving in a real-life situation with confidence and be ready to supply the information required by that situation. Therefore, we can speak about communicative language testing as testing of the student's ability to behave as he or she would do in everyday life. What we evaluate is the performance.

To conclude, we will repeat that there are different types of testing used in language teaching: discrete point and integrative testing, direct and indirect testing, etc. All of them are vital for testing the students.

Chapter 5

Testing the Language Skills

In this chapter we will attempt to examine the various elements or formats of tests that could be applied for testing the four language skills: reading, listening, writing and speaking. First, we will look at multiple-choice tests, after that we will come to cloze tests and gap-filling, then to dictations and so on. Ultimately, we will attempt to draw a parallel between them and the skills they could be used for.

5.1 Multiple-choice tests

It is not surprising that we have started precisely with multiple-choice tests (MCQs further in the text). As far as the author is concerned, these tests are widely used by teachers in their teaching practice and, moreover, are favoured by the students (here the author is supported by the equivalent idea of Alderson (1996:222)). Heaton (1990:79) believes that multiple-choice questions are basically employed to test vocabulary. However, we can argue with this statement, for multiple-choice tests can be successfully used for testing grammar, as well as for testing listening or reading skills.

It is well known what a multiple-choice test looks like, e.g. the following item from “Cambridge Preparation for the TOEFL Test”:

1. ---- not until the invention of the camera that artists correctly painted horses racing.

A) There was

B) It was

C) There

D) It

A task is basically represented by a number of sentences which should be provided with the right variant, which, in its turn, is usually given below. Furthermore, apart from the right variant the students are offered a set of distractors, which are normally introduced in order to “deceive” the learner. If the student knows the material being tested, s/he will spot the right variant, supply it and successfully accomplish the task. The distractors, or wrong words, basically differ slightly from the correct variant and are sometimes even funny. Nevertheless, they may very often be synonyms of the correct answer whose differences are known only to those who encounter the language more frequently through their job or field of study. In that case they can hardly be differentiated, and the students become frustrated. Certainly, such cases could be applied when teaching vocabulary and, consequently, will demand the students' ability to use the right synonym. The author of the paper has given multiple-choice tests to her students and must confess that, despite the difficulties in preparing them, the students found them easier to do. They explained their preference by saying that it was rather convenient to find the right variant, certainly if they knew what to look for. We presume that such a test format in a way motivated the learners and supplied them with additional support of which they would otherwise be deprived during a test, where nobody can hope for the teacher's help.

Everything mentioned above has raised the author's interest in the theory of the multiple-choice test format and, therefore, she finds extremely useful the following list of advantages and disadvantages generated by Weir. He (1990:43) lists four advantages and six disadvantages of the multiple-choice question test. Let us look at the advantages first:

- According to Weir, multiple-choice questions are structured in such a form that there is no possibility for the teacher or, as he puts it, the “marker” to apply his/her personal attitude to the marking process.

The author of the paper finds this very significant, for when employing a test of this format we see only what the student knows or does not know; the teacher cannot raise or lower the mark based on additional ideas the student displays in the work. Furthermore, the teacher, though knowing the strong and weak points of his/her students, cannot apply this information either to influence the mark. What s/he gets are the pure facts of the students' knowledge.

Another advantage is:

- The use of a pre-test, which can be helpful for establishing the level of difficulty of the items and of the test as a whole. This will reduce the probability of the test being inadequate or too complicated both to complete and to mark.

This could mean that the teacher can protect his/her students and him/herself against failures. For this purpose s/he just has to test the
