5 pages/≈2750 words | 6 Sources | IT & Computer Science | Research Paper | English (U.S.) | MS Word

Computer Science - Review of Algorithms (Research Paper Sample)

According to the guidelines, the paper was to be a literature review of six publications discussing computer algorithms. The publications under review were required to address techniques for identifying SQL injection vulnerabilities using black-box testing. The references were to be formatted in IEEE style with proper in-text citations, following the version of the IEEE format the client had used in previous papers.
"A Black-box Testing Tool for Detecting SQL Injection Vulnerabilities" by Zoran Djuric

The author examines the escalating problem of web application vulnerabilities, particularly the SQL injection (SQLI) vulnerability [1]. The vulnerability allows an attacker to alter or remove data, potentially causing a system failure. The author recommends a black-box analysis approach and uses the SQL Injection Vulnerability Detection Tool (SQLIVDT) to identify SQL injection vulnerabilities [1]. In this technique, the author simulates SQL injection attacks on web applications, restricting the scope of the analysis to the HTTP responses generated by the application server [1].

Figure 1: The dynamic security analysis approach.

The author uses a phased strategy to develop the solution, comprising web crawling, application entry point (AEP) identification and extraction, attacking, analysis, and report generation [1]. In phase one, web crawlers systematically scan the target web application. The next stage is to detect and extract AEP forms. Attacking is the third phase; the most typical attacks in this strategy include tautologies, illegal/logically incorrect queries, union queries, piggy-backed queries, and inference attacks [1]. The next phase is analysis, which looks for evidence of SQL injection vulnerabilities.

Figure 2: Analysis in detection of SQL injection.

The procedure shown above is used to watch for evidence of SQL injection. The last stage is report generation. The performance of the web application scanner was satisfactory, although there is no guarantee of comparable performance when it is used on other applications.
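The attacking and analysis phases can be illustrated with a minimal sketch. The code below is not Djuric's SQLIVDT; it is a hypothetical illustration, assuming made-up parameter names, a small set of tautology payloads, and a 0.95 similarity threshold for the response comparison.

```python
from difflib import SequenceMatcher
from urllib.parse import urlencode

# Hypothetical tautology payloads typical of the attacking phase.
TAUTOLOGY_PAYLOADS = ["' OR '1'='1", '" OR "1"="1', "' OR 1=1 --"]

def build_attack_urls(base_url, params, field):
    """Inject each payload into one AEP parameter, keeping the others benign."""
    urls = []
    for payload in TAUTOLOGY_PAYLOADS:
        attacked = dict(params)
        attacked[field] = payload
        urls.append(base_url + "?" + urlencode(attacked))
    return urls

def looks_vulnerable(benign_body, attacked_body, threshold=0.95):
    """Analysis phase: flag a vulnerability when the attacked response
    diverges sharply from the benign response."""
    similarity = SequenceMatcher(None, benign_body, attacked_body).ratio()
    return similarity < threshold
```

A real scanner would, of course, send the generated URLs over HTTP and compare the server's actual responses; here the comparison is a pure function so the idea can be tested in isolation.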
Also, the process was affected by the sluggish speed of SQLIVDT. The scanner is slow because it employs two similarity-detection techniques and performs two passes through all pages. On the plus side, the approach proved more effective at spotting SQL injection problems: in the experiments, SQLIVDT detected a significantly larger number of vulnerabilities in the tested applications.

"An Analysis of Black-Box Web Application Vulnerability Scanners in SQLi Detection" by Shilpa Jose, K. Priyadarshini, and K. Abirami

The authors investigate web application vulnerabilities exploited through SQL injection, which enable hackers to carry out harmful operations that result in significant losses for users. SQL injection allows an attacker to inject malicious code into a database query or delete certain rows, which may harm the functioning of a web application. However, the authors argue that raising awareness of software flaws can reduce these risks. The authors examine a five-phase approach for detecting SQL injection vulnerabilities: web crawling, AEP detection, attacking, analysis, and report generation [2].

Figure 2: Procedure to detect SQL injection vulnerability.

Crawling is the first phase, enabling scanners to obtain a basic structure of the target web application by following accessible hyperlinks. Next, the AEPs are identified, and the SQL injection attack code is deployed. The attack is the third phase: the scanner mounts attacks and triggers vulnerabilities using attack codes it generates. The results are then analyzed for any discovered application vulnerabilities. Finally, the scanners produce reports describing the vulnerabilities detected or confirming that the system is safe and secure. The performance of the evaluated web application vulnerability scanners was not the best: they are slow, inaccurate in vulnerability detection, and have difficulty with crawling.
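The crawling and AEP-detection phases described above can be sketched in a few lines. This is a minimal, hypothetical illustration built on Python's standard `html.parser`, not any of the scanners the authors evaluate: it only extracts form actions and input names from a fetched page.

```python
from html.parser import HTMLParser

class AEPExtractor(HTMLParser):
    """Collects form actions and their input names -- a crude stand-in
    for the AEP (application entry point) detection phase."""
    def __init__(self):
        super().__init__()
        self.forms = []          # list of [action, [input names]]
        self._current = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self._current = [a.get("action", ""), []]
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None and "name" in a:
            self._current[1].append(a["name"])
    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None
```

A full crawler would also follow the hyperlinks on each page and feed every fetched document through such an extractor; the discovered forms then become the targets of the attacking phase.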
The approach has various drawbacks; for instance, some web application vulnerability scanners, despite having a database of security vulnerabilities, fail to detect known vulnerabilities on some occasions [2]. On the bright side, the method was more successful in detecting SQL injection flaws in the evaluated web applications.

"Automated Black Box Detection of HTTP GET Request-based Access Control Vulnerabilities in Web Applications" by Kushnir et al.

The authors present a technique that makes automated black-box detection of access control issues in HTTP GET requests much more straightforward. The approach can be applied to various web applications and can detect multiple vulnerabilities while reducing the number of misclassifications. It is based on the notion that web pages served to a user exclusively include hyperlinks, buttons, and other navigation elements that the user may lawfully use [3]. The method starts with exploring the target web application and capturing the authenticated session identifiers of the applicable user profiles. The procedure of access control testing is shown in Figure 3 below.

Figure 3: Access control testing.

The crawler traverses the target application multiple times: once for every user whose credentials are supplied, and once more without any credentials. The crawling component produces lists of HTTP request and response pairs. Filtering then applies a set of five filters to exclude HTTP requests and responses that are not significant.

Figure 4: Filtering process.

Replay testing is the last phase; its result is a collection of requests that correspond to access control flaws. The accuracy of this approach was impressive: it missed only one vulnerability and reported a negligible number of false positives [3].
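The replay-testing phase can be sketched as follows. This is a toy illustration of the idea, not the authors' tool: the `fetch` callable and session values are placeholders, and "still succeeds with identical content" is a deliberately simplistic stand-in for the paper's filtering and comparison logic.

```python
def replay_test(recorded_urls, fetch, other_session):
    """Replay URLs crawled in one user's session using another session.
    If a response still succeeds with the same body, the URL is flagged
    as a potential access control flaw."""
    suspicious = []
    for url, original_body in recorded_urls:
        status, body = fetch(url, other_session)
        if status == 200 and body == original_body:
            suspicious.append(url)
    return suspicious
```

In practice the second session would belong to a lower-privileged user (or to no user at all), so any page that remains reachable points at a missing access-control check.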
However, its evaluation scope is restricted, whereas a greater range of web applications would need to be covered. Furthermore, the assumption that web pages served to users only include navigation elements to lawfully accessible material is not always correct, implying that specific vulnerabilities may be missed by this process.

"Verifying Sanitizer Correctness through Black-Box Learning: A Symbolic Finite Transducer Approach" by Sophie Lathouwers, Maarten Everts, and Marieke Huisman

According to the authors, an SFT (symbolic finite transducer) developer can use the black-box approach to generate models of a sanitizer's implementation against which the specification can be checked. To uncover vulnerabilities, the authors' approach begins with gathering the necessary background information, followed by applying the learning algorithm [4]. The user must create an SFA (symbolic finite automaton) that accepts the inadmissible inputs and outputs stated in the blacklist. To verify the specification, the authors compute the product of the given SFA and the SFA matching the input or output language. This method is shown below.

Figure 1: An SFA that accepts all inputs in the algorithm.

In the preliminary stage, the data concerning the string analysis approach used to evaluate the SFTs are examined. After that, the algorithm is applied. Different sanitizer implementations may react differently when blocking an input; consequently, with a black-box method, the user does not know in advance how the software reacts to an inadmissible input [4]. For every transition to the next state in the automaton, this method creates guards, i.e., input predicates. The performance of the approach is satisfactory but not perfect; the accuracy of error detection lags: for instance, in the study there were two cases whose error identification was inaccurate.
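The core check, namely that no inadmissible output survives sanitization, can be illustrated with a simplified sketch. This is not the authors' SFT/SFA construction: a regular expression stands in for the blacklist automaton, and the product-automaton emptiness check is replaced by running the sanitizer black-box on concrete probe inputs. The sanitizer and the blacklist pattern below are hypothetical examples.

```python
import re

# Hypothetical blacklist of inadmissible outputs, expressed here as a
# regular expression rather than a symbolic finite automaton.
BLACKLIST = re.compile(r"<\s*script", re.IGNORECASE)

def html_escape(s):
    """An example sanitizer under test."""
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

def blacklist_violations(sanitizer, probe_inputs, blacklist=BLACKLIST):
    """Black-box check: run the sanitizer on probe inputs and report any
    input whose output is still accepted by the blacklist."""
    return [s for s in probe_inputs if blacklist.search(sanitizer(s))]
```

The paper's approach is stronger than this sketch, since the learned SFT model lets the combination with the blacklist SFA be checked symbolically over all inputs rather than over a finite probe set.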
Not all sanitizers can be correctly represented by the suggested SFT learning method, particularly those whose behavior depends on multiple characters. This implies that if the method is used to reason about such sanitizers, the conclusions may be incorrect. The benefits of the process include successfully using the technique to reason about the appropriateness of sanitizers and to discover flaws in sanitizer implementations.

"Black Box Evaluation of Web Application Scanners: Standards Mapping Approach" by Malik Qasaimeh and Tariq Z. Khairallah

In [5], the authors employ the black-box technique to assess the functionality of web application scanners. The article examines the evaluation of web application scanners, which assist developers during the development and implementation stages in identifying flaws that might jeopardize the confidentiality of data sent between clients and web servers [5]. According to the authors, the study was prompted by rising doubts about web application scanners, because some produce inaccurate findings. The authors examine five WAVSs (Acunetix WVS, Burp Suite, NetSparker, Nessus, and OWASP ZAP) that might help find web application vulnerabilities [5]. The research aims to assess each scanner's ability to find genuine vulnerabilities using a variety of deliberately insecure web applications. At the time the paper was written, the WAVSs used in this research were considered the best available. Each scanner is used to scan each web application to find potential vulnerabilities. The scanners are run in their default profile configuration to generate typical outcomes, without any modification or tweaking, so that the results can serve as a reference after the vulnerability tests are performed [5]...
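Scoring a scanner against a set of known vulnerabilities, as such evaluations do, reduces to counting true positives, false positives, and misses. The sketch below is a generic illustration of that bookkeeping, not the authors' standards-mapping methodology; the vulnerability identifiers are made up.

```python
def evaluate_scanner(reported, known):
    """Score a scanner's findings against a ground-truth vulnerability list."""
    reported, known = set(reported), set(known)
    tp = len(reported & known)      # genuine vulnerabilities found
    fp = len(reported - known)      # spurious findings
    fn = len(known - reported)      # known vulnerabilities missed
    precision = tp / (tp + fp) if reported else 0.0
    recall = tp / (tp + fn) if known else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}
```

Under this framing, a scanner that produces "inaccurate findings" shows up either as low precision (many false positives) or low recall (many missed known vulnerabilities).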