E-examination System/Online Examination System (Coursework Sample)

Instructions:

The School of Computing, Engineering and Physical Sciences want to start using an on-line system for all student examinations, either by buying in a system or building a new one. The aim is to reduce paper-handling and the marking overhead. The intention is to run exams on site at the University but use an electronic system instead of paper. Consider what functionality and facilities the system should provide.

Content:

Individual Report
E-examination System/Online Examination System
Student Name
University/College
Date of Submission
Title: Online Examination System for CEPS
1.1 Requirements Gathering Process
I conducted the requirements gathering process using questionnaires. Designing, evaluating and administering survey questionnaires is an elaborate process that demands a significant input of resources. To reduce measurement error, I first sought to become well versed in the design and evaluation of survey questionnaires; secondly, I endeavoured to understand the diverse sources of measurement error that could jeopardize data quality and credibility. Questionnaire design and evaluation follows a basic survey process model, and I made use of a model that encompasses eight overlapping phases.
In phase 1, I used observation, the most fundamental tenet of research, to study activities, behaviour and events within their multifaceted contexts. I employed a bottom-up questionnaire processing approach, bearing in mind that what both respondents and interviewers take in through observation matters: a significant disparity between the survey participants' and the content specialists' knowledge of the subject matter could make the entire questionnaire evaluation process difficult. Consequently, I sought to determine what my target population (CEPS students) knew about the subject matter (e-examination systems). To achieve this, I drafted interview questions designed to establish the respondents' knowledge of e-examination systems, how they had acquired it, and how accurate and reliable it was.
In the second phase, I sought to determine the observation-based knowledge of the subject matter expert team; in particular, I interviewed the questionnaire content specialists and also individuals who shared similar characteristics with the probable survey participants. This approach emphasizes expert observation of respondents' behaviour and attributes. In phase 3, the conceptualization phase, I studied my domain of interest, online examination systems, and organized it into concept-based categories (Beatty, 1995). I took primary responsibility for observation and conceptualization and for the other tasks related to conceptualizing the subject matter. During phase 4, the assessment phase, I engaged an independent professional observer to check and correct my conceptual knowledge and assumptions in order to minimize errors. During phase 5, the operationalization phase, I undertook the task of translating survey concepts into questionnaire items (Tourangeau, 1984). In phase 6, the questionnaire pretesting phase, I collaborated with team members acting as content specialists to draft a plan for testing the questionnaire and ascertaining its operational functionality. I began the testing with an assessment of how various research participants cognitively process a questionnaire through understanding, deduction and opinion formation (Willis, 1991), although other methods of testing questionnaires also exist.
Phase 7, the administration phase, follows the conclusion of pretesting. During this phase, I iterated between phases 1 to 6 to determine whether modifications were necessary; once the questionnaire and metadata modifications were complete, I finalized the questionnaire for the production environment. Finally, the questionnaire entered phase 8, the quality assessment phase, during which I periodically conducted post-implementation quality assessments to establish whether the questionnaire was effectively capturing and measuring the concepts that our group had specified (Sudman & Bradburn, 1982).
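Taken together, phases 1 to 6 produce and pretest a draft, phase 7 loops back over them until no further modifications are needed, and phase 8 assesses quality after deployment. The short Python sketch below is purely illustrative of that control flow; the phase labels follow the report, but the draft structure and the stopping rule are assumptions invented for the example, not part of any survey methodology standard.

    # Minimal sketch of the eight-phase questionnaire life cycle described
    # above. Everything beyond the phase names is a hypothetical placeholder.

    DESIGN_PHASES = [
        "observation",          # phase 1: observe what respondents know
        "expert observation",   # phase 2: interview subject-matter experts
        "conceptualization",    # phase 3: organize the domain into concepts
        "assessment",           # phase 4: independent check of the concepts
        "operationalization",   # phase 5: translate concepts into items
        "pretesting",           # phase 6: cognitive testing of the draft
    ]

    def design_questionnaire(revisions_needed=1):
        """Phases 1-6 build a draft; phase 7 iterates over them until no
        further modifications are needed (a stand-in stopping rule)."""
        draft = {"completed_phases": [], "revision": 0}
        while True:
            draft["completed_phases"] = list(DESIGN_PHASES)  # run phases 1-6
            if draft["revision"] >= revisions_needed:        # phase 7 check
                break
            draft["revision"] += 1
        return draft

    def assess_quality(items_measured_ok):
        """Phase 8: share of fielded items that captured the intended concept."""
        if not items_measured_ok:
            return 0.0
        return sum(1 for ok in items_measured_ok if ok) / len(items_measured_ok)

    print(design_questionnaire())                     # one revision cycle
    print(assess_quality([True, True, False, True]))  # 0.75

The point of the sketch is simply that administration (phase 7) is not a separate linear step but a loop over the design phases, which matches the iteration described above.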
1.2 Critical Evaluation of the Requirements Gathering Approach
The approach we used incorporates interdependent sources of measurement error that arise during the questionnaire administration phase. Latek defines measurement error as the disparity between respondents' true attributes and their survey responses (Latek, 1985), and further differentiates four sources of measurement error: the interviewer, the respondent, the questionnaire and the data collection method (1985, pp. 163-166). In describing the measurement errors that emanate from our questionnaire, it is therefore essential to distinguish between the contribution of the group members acting as content specialists and those acting as design specialists. This distinction is critical because each group was assigned a different role in the questionnaire design and evaluation process, and each possesses specialized skills for handling issues that may be technical or conceptual (Schwarz, 1996). Evaluated from a functional standpoint, the content and design specialists constituted an integrated working team, which we called the questionnaire design and evaluation team (Latek, 1985); the interviewer, the respondents and the data collection methodology together constituted the data collection team.
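To make this definition concrete: if each respondent has a true attribute value and the survey records a response, every discrepancy between the two is a measurement error, and each error can be attributed to one of the four sources listed above. The Python sketch below tallies errors by source under that assumption; the record fields and the example data are invented for illustration.

    from collections import Counter

    # The four sources of measurement error distinguished in the text.
    SOURCES = ("interviewer", "respondent", "questionnaire", "collection method")

    def tally_errors(records):
        """Count response/true-value discrepancies per suspected source.
        Record fields ('true_value', 'response', 'source') are hypothetical."""
        counts = Counter()
        for r in records:
            if r["response"] != r["true_value"]:  # a measurement error
                counts[r["source"]] += 1
        return counts

    # Example: three records, two of which contain errors.
    records = [
        {"true_value": "yes", "response": "yes", "source": "respondent"},
        {"true_value": "yes", "response": "no",  "source": "questionnaire"},
        {"true_value": 3,     "response": 5,     "source": "interviewer"},
    ]
    print(tally_errors(records))
    # Counter({'questionnaire': 1, 'interviewer': 1})

Such a per-source tally is exactly what makes the content/design split useful: errors traced to the questionnaire point back at the design and evaluation team, while errors traced to the interviewer or respondent point at the data collection team.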
The content specialists played a pivotal role in the observation and conceptualization phases, where their primary responsibility was to define the subject domain and frame the questionnaire (Akkerboom & Dehue, 1997). They also defined the key subject concepts and categories and differentiated the theoretical variables. The role of the design specialists, of whom I was one, was to transform the conceptual specifications submitted by the content specialists into comprehensible, clear questionnaire items. This was challenging because the conceptual specifications were unclear and needed further definition. I also acted as the lead interviewer, and my primary mandate was to administer the survey questionnaire in a standardized fashion while minimizing measurement error (Willis, 1991). As the interviewer, I absorbed considerable blame from both content and design specialists as the supposed primary source of measurement error; however, some of this blame was misplaced, since several sources of error had originated in the initial questionnaire design and evaluation work. To improve questionnaire quality, the cognitive errors that may occur at each phase of administering the questionnaire must be checked (Belson, 1981). We employed various strategies for identifying questionnaire-related anomalies, thereby significantly reducing measurement error. Respondents must also be motivated to take a full part in the survey by engaging them in fulfilling behaviour, which diminishes their contribution to the overall measurement error; as one of the interviewers, I engaged respondents in gratifying one-on-one interactions that motivated them to give credible responses. Finally, the data collection method itself has a significant effect on the degree of measurement error (Akkerboom & Dehue, 1997).
Design and evaluation of the questionnaire was iterative and bidirectional, as various phases had to be reviewed and the necessary modifications effected. I noted that measurement error seemed to stem from the various roles and tasks we carried out as a team, and particularly from the survey administration phase. We observed that measurement error could have been significantly reduced if we, as the design and evaluation team, had had a better definition of the tasks related to the observation and conceptualization phases (Akkerboom & Dehue, 1997). This could have been achieved had the survey sponsor, CEPS, provided comprehensive documentation of the proposed e-examination system. Measurement error could also have been reduced if the design specialists in our team had worked in collaboration with the subject matter experts to establish the requirements needed for effective evaluation of the work (Schwarz, 1995). Documentation of the envisaged online examination system from CEPS would have enabled the design team to interpret the survey concepts effectively and derive relevant questionnaire items. Because we did not receive such documentation, our design team was ill prepared for the development and evaluation process (Willis, Royston & Bercini, 1991).
1.3 Discussion of Alternative Approaches
Despite their widespread use in capturing research data, questionnaires have limitations that make them inappropriate for certain research scenarios. Two other data collection approaches were open to us: personal interviews and telephone interviews. A personal interview would have been the most viable alternative given its numerous benefits: richness of response, room to clarify misconceptions, the opportunity for interviewers to follow up on responses, and a significant reduction in measurement error (Schwarz & Sudman, 1996). This approach captures better data than questionnaires, and it has been established that respondents are more conscientious during personal interviews, owing to the presence of the interviewer. Moreover, the response rate for personal interviews is higher than for questionnaires.
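One way to frame the choice among the three approaches is as a simple weighted decision matrix. The Python sketch below scores each data collection method against the criteria discussed in this section; the scores (1 = weak, 5 = strong) and the weights are assumptions made up for the example, not figures from the literature.

    # Illustrative weighted comparison of the three data collection methods.
    WEIGHTS = {
        "response richness": 0.3,
        "response rate": 0.3,
        "cost efficiency": 0.2,
        "error control": 0.2,
    }

    METHODS = {
        # questionnaires: cheap to run, but flatter responses and lower uptake
        "questionnaire":        {"response richness": 2, "response rate": 2,
                                 "cost efficiency": 5, "error control": 2},
        # personal interviews: rich responses, follow-ups, conscientious respondents
        "personal interview":   {"response richness": 5, "response rate": 5,
                                 "cost efficiency": 2, "error control": 4},
        # telephone interviews: a middle ground on most criteria
        "telephone interview":  {"response richness": 3, "response rate": 4,
                                 "cost efficiency": 4, "error control": 3},
    }

    def score(method_scores):
        """Weighted sum of criterion scores for one method."""
        return sum(WEIGHTS[c] * v for c, v in method_scores.items())

    for name, scores in METHODS.items():
        print(f"{name}: {score(scores):.2f}")
    # questionnaire: 2.60, personal interview: 4.20, telephone interview: 3.50

Under these invented weights the personal interview comes out on top, which mirrors the argument above; with a tighter budget, raising the weight on cost efficiency would favour the questionnaire instead.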