Perceived Integrity of the Decision-Maker for Automated and Augmented Systems (Dissertation Sample)


Algorithmic Management

Student's Name
Institutional Affiliation
Instructor's Name
Course Name and Code
Due Date

Abstract

Reliance on algorithmic systems to inform decisions has permeated many aspects of day-to-day life. This study sought to examine the interaction effects of room for negotiation on the perceived integrity of the decision-maker in algorithmic systems. The study was guided by the following hypotheses:

H1. Automated decision-makers are perceived as having less integrity than augmented decision-makers.
H2. There is a difference in perceived integrity of decision-making processes for automated and augmented systems when moderated by the ability to object.
H3. Automated decision-makers are perceived as having less integrity than augmented decision-makers when the employee has the ability to object to the decision made.

To achieve its objective, the study relied on a quantitative research methodology within a deductive research design. Specifically, a quantitative vignette study approach was applied to identify relationships between perceived integrity and automated or augmented decision-making, and to determine whether the agency of the employee influences this relationship. The data obtained from the survey were organized, cleaned, and entered into SPSS for analysis. To test Hypothesis 1, a binary logistic regression was conducted on the relationship between automated decision-making and perceived integrity, with the decision-maker as the independent variable and perceived integrity as the dependent variable. For Hypothesis 2, the moderation (interaction) effect of automated decision-making and room for negotiation on perceived integrity was measured by ordinal regression, since it was established that the data were not normally distributed. A multinomial regression analysis was conducted to measure differences between groups.
Results from the analysis suggested that the odds of automated decision-makers being perceived as having less integrity than augmented decision-makers were estimated at 0.858. The odds of the ability to object predicting perceived integrity in automated decision-making, where the algorithm takes decisions autonomously, were estimated at β = 0.846. The subject's ability to object in augmented decision-making did not register any significant results in influencing perceived integrity. A β of 0.95 indicated that the odds of automated decision-makers being considered as having less integrity than augmented decision-makers increased by 95% for each subset where the respondent did not have the ability to object; where the respondent had the ability to object, the difference between automated and augmented decision-makers was considered insignificant. These findings led to the conclusion that augmented systems are perceived to have higher levels of integrity than automated systems. This perception is even stronger where the subject has the ability to object.
Table of Contents

Chapter 1
  Introduction
  Objectives of the Study
  Rationale of the Study
Chapter 2
  Literature Review
    What are algorithms?
    Algorithmic decision-making
    Perceived integrity
    Room for negotiation
Chapter 3
  Methods
    Introduction
    Study Design
    Sampling
    Sample Size
    Research Instruments
    Validity and Reliability
    Control variables
    Procedure
    Analytical plan
Chapter 4
  Results
    Demographic characteristics of the Sampled Population
    Normality Tests
    H1
    H2
    H3
Chapter 5
  Discussion
    Perceived Integrity within the Context of Automated vs. Augmented Decision-makers
    Room for Negotiation
  Conclusion
References
Appendices
  Appendix 1: Demographic Characteristics of the Sample Population
  Appendix 2: Test of Normality
  Appendix 3: Test of Model Fitness
  Appendix 5: Binary Results
  Appendix 6: Model Fitness Results
  Appendix 7: Model Results
  Appendix 8: Test of Fit
  Appendix 9: Results
Chapter 1

Introduction
In many decision domains, algorithms surpass human capabilities. For example, algorithmic systems make fewer modelling errors than human problem solvers, and in more uncertain domains, facilitated algorithmic systems have proven more effective than human experts. The observed efficiency of algorithmic systems has ushered in their ubiquitous use in many work environments. At the helm of algorithm application in decision-making processes are two types of systems: fully automated decision processes and augmented decision processes. Automation refers to system decision-making based on selected data and information, without human interference. Augmented algorithmic systems combine the analytical capabilities of algorithms with human support in decision-making (Langer & Landers, 2021). With good design and frequent updates, Raisch and Krakowski (2021) argue, decision automation and augmentation have been shown to offer more efficient decisions than the most experienced human experts. Nevertheless, poor system design, flaws in algorithms, and information misuse have eroded the integrity associated with algorithmic systems.
System integrity, as a measure of perceived attitudes towards an algorithmic system, refers to the quality the system has when performing at optimal function. For a system to have integrity, it must demonstrate impartiality and be free from unauthorized manipulation. At optimal function, attitudes towards the integrity of an algorithm-based system are known to vary based on the nature of the system (Srinivasan & Sarial-Abi, 2021; Chaffey & Patron, 2012). For example, research by Lee (2018) showed that algorithms are perceived as helpful tools; however, there was variance in perceived integrity when the decision was purely automated compared to when human input was introduced. Specifically, even though decisions made by an algorithm outperform human decision-makers on average, humans are hesitant to use algorithms in fields in which the decision domains are less predictable, such as medicine (Dietvorst & Bharti, 2020). Large concerns arise about the risks and fairness of algorithmic decision-making, especially where the system is purely automated (Araujo, Helberger, Kruikemeier, & de Vreese, 2020). In automated systems, algorithms have the potential to be more objective, but they have been criticized for enhancing unfavorable biases such as discrimination, power asymmetry, and opacity (Lepri, Oliver, Letouzé, Pentland, & Vinck, 2017).
The perceived integrity of a system is even more complex to rate when the ability to negotiate the choice is introduced. Here, the level of trust becomes an apparent predictor of perceived integrity. Current research shows that attitudes of trust are an important mediating factor in determining how people interact in social settings. Trust has equally been implicated in assessing attitudes towards automated systems. While trust guides perceptions of integrity, it does not determine the reliance on and efficiency of such systems. Thus, the ability to negotiate the decision (to object or not to object) moderates how an individual's level of trust and assessment of fairness will rate the integrity of the system. Contemporary literature is awash with measures of integrity in automated and augmented algorithmic systems; however, the extent to which the subject's choice to object to decisions influences their perceived integrity of the system remains underexplored. This study closes this gap by examining how perceived integrity varies between automated and augmented systems where the subject has the ability to object to the decision outcome.
Objectives of the Study
The goal of this thesis is to examine the interaction effects of room for negotiation on the perceived integrity of the decision-maker in algorithmic systems. The decision-maker can be supported by an algorithm (augmented), but the decision-maker can also be automated (an algorithm). To achieve this goal, three hypotheses are proposed:
H1. Automated decision-makers are perceived as having less integrity than augmented decision-makers.
H2. There is a difference in perceived integrity of decision-making processes for automated and augmented systems when moderated by the ability to object.
H3. Automated decision-makers are perceived as having less integrity than augmented decision-makers when the employee has the ability to object to the decision made.
Rationale of the Study
For practice, it is important to gather insights into how augmented and automated decision-making is viewed by employees, since the perception of algorithms can significantly influence the adoption of algorithmic decision-making (M. Lee, 2018). This adoption is needed in order to increase the acceptance of employees towards the use of algorithmic man...
