THE RELIABILITY OF COMPUTER-BASED LANGUAGE TESTING: A STUDY OF ND I AND HND I ENGLISH LANGUAGE EXAMINATION IN DELTA STATE POLYTECHNIC, OGWASHI-UKU
Chiedu, Rosemary Ebelechukwu
Department of Languages, Delta State Polytechnic, Ogwashi-Uku
Abstract
This paper sets out to examine how reliable computer-based language testing is in examining students' level of proficiency in the English Language. The reliability of a language test is concerned with the consistency of scoring and the accuracy of the administering procedures of the test. Twenty lecturers in the Department of Languages, Delta State Polytechnic, Ogwashi-Uku were interviewed to find out their opinions on the appropriateness and efficiency of computer-based examination in English Language (General Studies) courses as presently practiced with ND I and HND I level students in the polytechnic. The advantages and disadvantages (drawbacks) of computer-based language tests were also examined. Suggestions and recommendations were then proffered on how the tests can be improved in terms of reliability to enhance students' proficiency in the English Language, which is the essence of studying the language at these levels.
Keywords: Computer-based tests, language testing, test reliability, language proficiency.
INTRODUCTION
THE CONCEPT OF LANGUAGE TESTING
Wikipedia (2012) defines language testing as a field of study under the umbrella of Applied Linguistics; its main focus is the assessment of first, second or other languages in school, college or university contexts, as well as the assessment of language in immigration, citizenship and asylum contexts. Shohamy and Hornberger (2008) explain that tests are frequently used as measurement instruments designed to elicit specific behaviour directly or indirectly. This reaffirms that a language test is a sample of the language behaviour of the learner, whose language ability is examined through test administration. Perhaps the most common use of language tests is to discover the strengths and weaknesses in the learned abilities of students, since language testing involves the practice of evaluating how effectively an individual uses a particular language that has been learned.
In addition, Allen (2009) states that the purpose of testing is to determine and discriminate a person's ability from that of others. He sees "language testing as the practice and study of evaluating the proficiency of an individual in using a particular language effectively". However, Farhady and Keramati (1996:191) are of the opinion that "tests are applied to make decisions about people's lives. Therefore, fair decisions will be impossible if tests do not provide accurate information". From the foregoing, it is important that tests should have the characteristics of reliability, validity, practicality and fairness, but the main focus of this paper is the reliability of computer-based language testing as presently adopted by the management in examining ND I and HND I students in English Language (General Studies) courses.
THEORETICAL FRAMEWORK
Dell Hymes' theory of Communicative Competence and Chomsky's theory of Generative Grammar form the theoretical framework of the study. This is because the notions of competence and performance are essential in understanding computer-based language testing. In Generative Grammar theory, Chomsky states that competence is the ability or capacity to generate an infinite number of sentences from a limited set of grammatical rules. This view posits that competence is logically prior to performance and is the generative basis for further learning. Chomsky, therefore, sees the linguistic function as basic to the communicative function of language, as a learner masters the linguistic aspects of a particular language before he can perform (communicate) in the language. On the other hand, Dell Hymes sees the above theory as grossly inadequate, stating that the communicative function of language supersedes the linguistic function. To him, communicative competence refers to a language user's knowledge of syntax, morphology and phonology, as well as social knowledge about how and when to use utterances appropriately. Computer-based language tests are conducted to test students' linguistic and communicative competence, to enable them to function effectively in the school, the workplace and any other situation where the use of English is required, as previously noted in the Wikipedia definition.
THE CONCEPT OF COMPUTER-BASED LANGUAGE TESTING
As a result of the digital revolution, educators have started to benefit from modern computer technology to carry out accurate and efficient assessment of learning outcomes in secondary and higher education in Nigeria. In recent years, Nigerian institutions of higher education, especially the National Open University of Nigeria (NOUN), have started integrating e-learning (or computer learning) and assessment initiatives into their undergraduate programmes.
Computer-based testing, according to Chapelle (2001), refers to making use of computers in preparing questions, administering examinations and scoring them. Jeong (2014), Linden (2002) and Wang (2010) state that with the advent of computers as testing devices since the 1980s, a different point of view has been gained, so that a more authentic, cost-saving, scrutinized and controlled testing environment can be achieved compared with the traditional paper-and-pencil one. Folk and Smith (2002) agree that computer-based testing, which started in the late 1970s or early 1980s, was long thought of as an alternative to paper-based testing.
Many different computer-based test delivery modes have emerged since computers began to be used as testing tools, namely: Computer Adaptive Testing (CAT), Linear On-the-Fly Testing (LOFT) or Computerized Fixed Tests (CFT), Computerized Mastery Testing (CMT) (Ward, 2002: 38) and Automated Test Assembly (ATA) (Parshall et al., 2002: 11). Computer Adaptive Testing (CAT) is purely performance- or individual-based testing: the more questions a candidate answers correctly, the more challenging the questions that appear on the screen, and vice versa. By contrast, LOFT or CFT has a fixed time and test length for all test takers. Parshall et al. (2002) opine that exam security is the main goal in LOFT, rather than the psychometric values pursued in CAT. ATA (Automated Test Assembly), on the other hand, chooses items from an item pool according to the test plan and makes use of content and statistical knowledge, in addition to having a fixed time.
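To make the adaptive principle behind CAT concrete, the sketch below simulates the "harder after a correct answer, easier after a wrong one" behaviour. It is only an illustration under assumed names: the item bank, the five-level difficulty ladder and the answer callback are invented here, and real CAT engines select items using item response theory rather than a simple ladder.

```python
import random

# Hypothetical item bank: difficulty levels 1 (easiest) to 5 (hardest),
# with twenty invented question labels per level.
ITEM_BANK = {
    level: [f"question-{level}-{i}" for i in range(1, 21)]
    for level in range(1, 6)
}

def run_cat_session(num_items, answer_is_correct):
    """Administer num_items questions, adapting difficulty to performance.

    answer_is_correct is a callback that presents a question to the
    candidate and reports True (correct) or False (wrong).
    """
    level = 3  # start at medium difficulty
    administered = []
    for _ in range(num_items):
        question = random.choice(ITEM_BANK[level])
        administered.append((question, level))
        if answer_is_correct(question):
            level = min(level + 1, 5)  # correct answer -> harder item next
        else:
            level = max(level - 1, 1)  # wrong answer -> easier item next
    return administered

# Example: simulate a candidate who answers correctly 60% of the time.
history = run_cat_session(10, lambda q: random.random() < 0.6)
for question, level in history:
    print(level, question)
```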
COMPUTER-BASED LANGUAGE TESTING IN DELTA STATE POLYTECHNIC, OGWASHI-UKU
Computer-based testing for all courses in the various departments at the National Diploma I and Higher National Diploma I levels was introduced by the Academic Board of the school under the chairmanship of the rector, Professor (Mrs) S. C. Chiemeke (a Professor of Computer Science), during the first semester of the 2017/2018 academic session. The decision to use this medium of assessment was basically to curb examination malpractice by lecturers who extort money from students for marks, and to speed up result processing, which is relatively faster with computers. Incidentally, English Language (GNS) courses were among the courses affected by this development because they are mandatory for ND I and HND I students in virtually all the departments in the school.

The perceived advantages of this innovation are as follows. Firstly, computer-based tests are totally objective in nature. They are therefore fair to all students and do not tend to favour the verbally fluent, fast writers or those with good handwriting, unlike essay questions (subjective tests), where a student may pick unnecessary points yet present them in good handwriting and in a logical, convincing format. Also, computer-based language tests supply immediate feedback and scoring, which has a significant impact on pedagogy, as test takers can grasp their mistakes when immediate feedback is given upon completion of the test. This eases teachers' workload of scoring exam papers; with manual marking it is practically impossible to give enough feedback about each student's mistakes, and even where it is possible, it comes so late that students no longer remember the questions or their answers. In addition, good test items (questions) can be used and reused, as they can be stored in a question bank for future use. Moreover, Billing (1973: 131-148) points out that "in objective tests, scoring is objective and reliable; the name objective implies that everyone who scores the response to the item will arrive at the same mark". Hence, the reliability of the measure is enhanced. Another important strength of the computer-based objective test is its adequacy at sampling the subject matter and instructional objectives of the course on which the test is based. The large number of questions set enables an adequate sampling of the student's actual ability, and the knowledge that the test samples a wide variety of questions encourages students to read widely. This eventually increases the reliability and the validity of the test. Finally, computer-based language testing enables examiners to collect data about the outcome of the test, such as how many questions have been answered correctly, how many have been answered wrongly and how many minutes have been spent on each question, which Parshall et al. (2002: 2) call response latency (a minimal sketch of this kind of data collection appears after the list of drawbacks below).

However, the drawbacks of computer-based language testing, as observed by English Language lecturers, are as follows:
1. Computer-based language tests are speed tests in which students are allowed to answer seventy (70) objective questions in forty-five (45) minutes. When this time elapses, the computer system shuts down automatically. As a result, it has been observed that some students who cannot finish answering the test items within the stipulated time are logged out by the system. Most of the time, the invigilators have to seek the assistance of the technicians assigned to the ICT centre to log the students back in so that they can sign out properly. This has been a major setback over the three years computers have been used for examinations in the school.
2. All twenty language lecturers interviewed in the Department of Languages criticized the use of computers to test language courses because students are not encouraged to improve their writing skills. The students are not able to express themselves in the English Language, as it is not possible to use objective tests to examine students in essay writing and composition.
3. Examination malpractice is still prevalent in computer-based language testing. Students sometimes impersonate their friends in the examination once they get hold of a friend's PIN, which is used to log into the system. If the invigilator in the hall does not carefully compare the person seated with the passport photograph on the screen, impersonation goes undetected. In the last semester's examination (2018/2019, second semester), more than three students were caught attempting this in a single day.
4. It is a known fact that cheating is easier in objective tests than in subjective tests. A student can easily assist the person sitting next to him by simply calling out the options (A, B or C).
5. By using computer-based testing to examine language courses, the full potential of students is not harnessed, because they are given no credit for partial answers, nor do they have the opportunity to express their ideas on a given question. This is why Murayama (2003:1) notes that "in objective tests, students are encouraged towards rote learning". Rote learning kills creativity and productivity in students.
6. All the language lecturers interviewed expressed their disdain for computer-based language testing because objective tests are difficult to set, given that language courses have a streamlined curriculum. They also complained that setting a minimum of one hundred questions for a single course is a herculean task, especially if a lecturer has three General Studies courses that are computer-based.
According to them, developing a good test needs time, skill, experience, commitment and adequate planning.
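As promised above, the outcome data and response latency that a computer-based test can collect are pictured in the short sketch below. The function and field names are invented for illustration; they are not taken from the polytechnic's actual testing platform.

```python
import time

def administer_item(question, get_answer, correct_option):
    """Present one question; record correctness and response latency."""
    start = time.monotonic()
    answer = get_answer(question)        # e.g. read the candidate's choice
    latency = time.monotonic() - start   # seconds spent on this item
    return {
        "question": question,
        "correct": answer == correct_option,
        "latency_seconds": round(latency, 1),
    }

def summarise(records):
    """Aggregate the outcome data the examiners are said to collect."""
    right = sum(r["correct"] for r in records)
    return {
        "answered_correctly": right,
        "answered_wrongly": len(records) - right,
        "average_latency_seconds": (
            sum(r["latency_seconds"] for r in records) / len(records)
        ),
    }

# Example with a stand-in candidate who always picks option "A".
records = [
    administer_item("Q1", lambda q: "A", correct_option="A"),
    administer_item("Q2", lambda q: "A", correct_option="D"),
]
print(summarise(records))  # 1 correct, 1 wrong, near-zero latencies here
```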
HOW TO INCREASE THE RELIABILITY OF COMPUTER-BASED LANGUAGE TESTS
Computer-based language tests in Delta State Polytechnic, Ogwashi-Uku are of the multiple choice type of objective test. Richards and Schmidt (2002: 346) define a multiple choice test as "a test item in which the test taker is presented with a question along with four or five possible answers from which one must be selected". Usually, the first part of a multiple choice item is a question or incomplete statement known as the stem. The different possible answers are known as the alternatives. The alternatives contain one correct answer and several wrong answers called distractors. For example: ______ is the study of meaning in language. A) Syntax B) Phonology C) Semiotics D) Semantics.

In the above question, D is the correct answer, whereas A, B and C are distractors, intended to divert the attention of students who have little knowledge of the subject matter.
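The stem/alternatives/key/distractors structure just described maps naturally onto a small data type. The sketch below is a minimal illustration under assumed names (the class and its fields are not from any real testing system); it encodes the sample item above and shows the dichotomous scoring used for multiple choice items.

```python
from dataclasses import dataclass

@dataclass
class MultipleChoiceItem:
    stem: str           # the question or incomplete statement
    alternatives: dict  # option letter -> option text
    key: str            # letter of the one correct answer

    @property
    def distractors(self):
        """The wrong options, meant to attract weak candidates."""
        return {k: v for k, v in self.alternatives.items() if k != self.key}

    def score(self, response):
        """Dichotomous scoring: 1 for the key, 0 for any distractor."""
        return 1 if response == self.key else 0

item = MultipleChoiceItem(
    stem="______ is the study of meaning in language.",
    alternatives={"A": "Syntax", "B": "Phonology",
                  "C": "Semiotics", "D": "Semantics"},
    key="D",
)
assert item.score("D") == 1 and item.score("B") == 0
print(item.distractors)  # {'A': 'Syntax', 'B': 'Phonology', 'C': 'Semiotics'}
```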
Reliability of the above type of test is of the utmost importance to the language lecturers and the students alike. Language testing is content-based or content-dependent because the tests are based on a prescribed curriculum or syllabus, which prescribes the content to be covered in the course of study. The purpose of testing in this case is to measure the performance and language proficiency of students. According to Nunnally (1982), reliability is concerned with the extent to which measurements are repeatable if all items being studied were included. In other words, reliability can be described as the extent to which a test measures what it purports to measure consistently and accurately. In the same vein, Maduekwe (2007) states that test reliability refers to the idea that a good language test should give consistent results. A reliable English test, in her opinion, is one which measures whatever it is supposed to measure consistently under all conditions. For example, if the teacher administers three tests in an English language class, say over a term, and the students perform in a consistent manner on the tests, then the test items are reliable.
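The consistency described here can also be quantified. The paper does not prescribe a particular coefficient, but for dichotomously scored items such as multiple choice questions, the Kuder-Richardson 20 (KR-20) statistic is one standard choice; the sketch below computes it over a score matrix invented purely for illustration.

```python
def kr20(score_matrix):
    """Kuder-Richardson 20 for a matrix of 0/1 item scores.

    score_matrix: one row per student, one entry per item
    (1 = answered correctly, 0 = answered wrongly).
    """
    n_items = len(score_matrix[0])
    n_students = len(score_matrix)
    totals = [sum(row) for row in score_matrix]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    # p = proportion answering each item correctly; q = 1 - p
    pq_sum = 0.0
    for i in range(n_items):
        p = sum(row[i] for row in score_matrix) / n_students
        pq_sum += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)

# Invented score matrix: five students, four items.
scores = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
]
print(round(kr20(scores), 2))  # values nearer 1.0 indicate more consistency
```

The following suggestions will improve the reliability of computer-based language tests if adopted by English language teachers/lecturers and other test administrators who may want to use computers to test the proficiency of their students or learners generally.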
First, English Language teachers should start planning the test and writing down the test questions well ahead of the time the examination is to be taken by the students. They can start writing their multiple choice questions at the end of every topic in the syllabus. If this is done, the questions generated will capture the essence and behavioural objectives of the topic in the students' learning. If it is not done, the teacher is likely to write the test items hurriedly at the last minute, and this will give the test items low reliability.
Secondly, the language teacher should pay more attention to the careful construction of the test items/questions. Each question should be phrased properly and clearly so that students know exactly what they are required to do. The teacher should also write items that discriminate between good and poor students and are of an appropriate difficulty level: the questions should be neither too easy nor too difficult for the students.
Thirdly, the teacher should construct the test items with clear instructions and directions. Poorly worded, ambiguous or trick questions are another major threat to reliable measurement in computer-based language testing.
In addition, English language lecturers should bear in mind that students are expected to answer seventy (70) multiple choice objective questions in forty-five (45) minutes. The questions and options should therefore be as clear and straightforward as possible, to reduce the chance of students guessing the correct option when they do not know the answer to a particular question. Also, a question should not be so difficult that a student spends more than thirty seconds on it. Otherwise, there is a tendency for students not to finish answering the questions within the allotted time, so that they are logged out by the computer.
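A quick time-budget check makes the thirty-second guideline concrete: 45 minutes is 2,700 seconds, and 2,700 ÷ 70 ≈ 38.6 seconds per question on average. Holding each item to about 30 seconds therefore leaves roughly 600 seconds (ten minutes) of slack for harder items and review before the system shuts down.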
Finally, to curb impersonation during computer-based language examinations, fingerprint identification should be introduced in addition to passport and PIN identification. The testing tool should be designed in such a way that a student keys in the PIN and uses a fingerprint to log in to write the examination. This will prevent an impostor from sitting the examination for another student.
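A minimal sketch of this suggestion follows. Everything in it is hypothetical: the record store, the matcher and the names are invented for illustration, and a real deployment would rely on the ICT centre's biometric hardware and database rather than a dictionary and a byte comparison.

```python
# PIN -> (student name, enrolled fingerprint template); invented records.
REGISTERED = {
    "483920": ("Student A", b"template-a"),
}

def fingerprints_match(scanned, enrolled):
    """Stand-in for a real biometric matcher, which would compare
    templates statistically rather than byte-for-byte."""
    return scanned == enrolled

def login(pin, scanned_fingerprint):
    """Admit a candidate only if both the PIN and the fingerprint check out."""
    record = REGISTERED.get(pin)
    if record is None:
        return None  # unknown PIN
    name, enrolled = record
    if not fingerprints_match(scanned_fingerprint, enrolled):
        return None  # borrowed PIN: the fingerprint exposes the impostor
    return name

assert login("483920", b"template-a") == "Student A"   # genuine candidate
assert login("483920", b"template-b") is None          # impostor rejected
```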
SUMMARY AND CONCLUSION
This paper set out to investigate how reliable computer-based English Language testing is in examining ND I and HND I level students in Delta State Polytechnic, Ogwashi-Uku. Twenty lecturers in the Department of Languages were interviewed and several observations were highlighted. The advantages and disadvantages of this type of assessment were discussed, and suggestions and recommendations were proffered on how to make computer-based English Language testing more reliable and more successful in achieving the behavioural objectives of each topic in the course curriculum.
In conclusion, the English Language teacher who sets the questions used for the examination has the bulk of the work to do in ensuring that the test items fed into the computer system are well constructed, taking into consideration the factors of reliability and validity.
REFERENCES
Allen, J. P. B. (1984) General Purpose Language Teaching: A Variable Focus Approach. In C. J. Brumfit (Ed.), General English Syllabus Design (pp. 61-74). Oxford: Pergamon Press
Billing, D. E. (1973) "Objective Tests in Tertiary Science Courses" in D. E. Billing & B. E. Furniss (Eds), Aims, Methods and Assessment in Advanced Science Education (pp. 131-148). London: Heydon and Sons
Chapelle, C. (2001) Computer Applications in Second Language Acquisition: Foundations for Teaching, Testing and Research. Cambridge: Cambridge University Press
Chiedu, R. E. & Omenogor, H. D. (2014) The Concept of Reliability in Language Testing: Issues and Solutions. Journal of Resourcefulness and Distinction, 8(1), pp. 187-195
Farhady, H. & Keramati, M. N. (1996) A Text-driven Method for the Deletion Procedure in Cloze Passages. Language Testing, 13(2), 191-207
Folk, V. G. & Smith, R. L. (2002) Models for Delivery of CBTs. In C. N. Mills, M. T. Potenza, J. J. Fremer & W. C. Ward (Eds), Computer-Based Testing: Building the Foundation for Future Assessments (pp. 41-46). Mahwah, NJ: Lawrence Erlbaum Associates Inc.
Jeong, H. (2014) A Comparative Study of Scores on Computer-based Tests and Paper-based Tests. Behaviour and Information Technology, 33(4), 410-422. doi.org/10.1080/0144929X.2012.710647
Linden, W. J. van der & Glas, C. A. W. (2002) Computerized Adaptive Testing: Theory and Practice. New York: Kluwer Academic Publishers
Maduekwe, A. N. (2007) Principles and Practices of Teaching English as a Second Language. Lagos: Vitaman Educational Books
Murayama, K. (2003) Test Format and Learning Strategy Use. Japanese Journal of Educational Psychology, 51(1), 1-12
Nunnally, J. C. (1982) Reliability of Measurement. In Encyclopaedia of Educational Research (4), pp. 15-16
Parshall, C. G., Spray, J. A., Kalohn, J. C. & Davey, T. (2002) Practical Considerations in Computer-based Testing. New York: Springer
Richards, J. C. & Schmidt, R. (2002) Longman Dictionary of Language Teaching and Applied Linguistics. London: Longman
Shohamy, E. & Hornberger, N. H. (2008) The Power of Tests. London: Longman
Wang, H. (2010) Comparability of Computerized Adaptive and Paper-Pencil Tests. http://images.pearsonassessment.com/images/tmrs/tmrs_rg/bulletin_13.pdf
Wikipedia, the free encyclopaedia: http://en.wikipedia.org. Retrieved December 2019