Graduate School of Natural and Applied Sciences
Browsing Graduate School of Natural and Applied Sciences by Subject "computer engineering"
Now showing 1 - 20 of 49
Item A COMPARATIVE STUDY OF COMMON CYBER SECURITY POLICIES FOR DIFFERENT ENTERPRISES (2022-02-28) SAMEER, MAHA; MISHRA, ALOK
Cyber security is an essential issue and a top priority not only for enterprises of all sizes but also for national security. Nowadays, many enterprises invest heavily in cyber security to protect their cyber environments and their information and communication technology. Therefore, several enterprises adopt security policies as one of their security defense solutions, to safeguard against attacks before damage is done and the enterprise's business is negatively affected. In this study, significant and common cyber security policies of different enterprises are compared and discussed. These enterprises span the health, financial, educational, aviation, and e-commerce sectors. The purpose of this study is to help build robust and inclusive cyber security in each company and enterprise. The results indicate that there are ten important common security policies that should be applied in every enterprise and organization: a privacy policy, data protection policy, data retention policy, information security policy, e-mail security policy, physical security policy, website security policy, cloud security policy, network security policy, and access control policy. Additionally, the results show that some cyber security policies are more critical for some enterprises than for others; this difference in priority depends on the nature of the information under the enterprise's control and the enterprise's security needs with respect to these policies.

Item A COMPARATIVE STUDY OF NEURAL NETWORK APPROACHES IN NETWORK ANOMALY DETECTION (2022-02-15) Öney, Mehmet Uğur; PEKER, Serhat
Network intrusion detection is an important research field, and artificial neural networks have become increasingly popular in this area.
Despite this, research comparing artificial neural network architectures for network intrusion detection is relatively scarce. To make up for this, this study examines neural network architectures for network intrusion detection to determine which architecture produces high accuracy and a low false positive rate, and what the effects are of architectural components such as optimization functions, activation functions, and the momentum of the learning rate. For this purpose, we generated 6,480 neural networks and evaluated them on the KDD99 dataset and in a near-real-time simulation environment. This thesis provides a roadmap to guide future research on network intrusion detection using artificial neural networks.

Item A COMPARISON OF IMAGE DETECTION ALGORITHMS YOLO AND FASTER R-CNN IN DIFFERENT CONDITIONS (2022-06-13) ABDULGHANI, ABDULGHANI MAWLOOD A.GHANI; Dalveren, Gonca Gökçe Menekşe
In this thesis, we compare YOLOv4 with YOLOv3 and Faster R-CNN in terms of object detection in both challenging weather conditions and darkness. Moving objects such as pedestrians, cars, buses and motorcycles can be difficult to detect in rainy, foggy and snowy weather conditions, or even at night. This study evaluates the three models to determine which performs best in such circumstances, bearing in mind that none of them was originally intended for bad weather conditions or nighttime use. The study was carried out on a Tesla P4 GPU with 12 GB RAM. We trained these algorithms on an Open Images dataset, where YOLOv4 scored the best results at 40,000 iterations, with 72 mAP and 0.63 recall. YOLOv3 peaked at 36,000 iterations, with 65.53 mAP and 0.54 recall. Finally, Faster R-CNN reached, at 36,000 iterations, 51 mAP and 0.49 recall. In terms of detection speed, YOLOv4 ran at 42 FPS, YOLOv3 at 37 FPS, and Faster R-CNN at 10 FPS on 30 FPS video.
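The mAP and recall figures reported above are built on intersection-over-union (IoU) between predicted and ground-truth boxes: a prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5. A minimal sketch of the underlying IoU computation (the `(x1, y1, x2, y2)` box format is an illustrative assumption, not taken from the thesis):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Overlap is 5x5 = 25, union is 100 + 100 - 25 = 175
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # → 0.1429
```

Averaging precision over recall levels and detection classes at a fixed IoU threshold then yields the mAP values being compared.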
Based on these results, YOLOv4 performed the best in comparison to YOLOv3 and Faster R-CNN.

Item A DATABASE DESIGN METHODOLOGY FOR COMPLEX SYSTEMS (2013-07-14) TOPALLI, Damla; ÇAĞILTAY, Nergiz
The quality of software is directly related to how well it addresses users' needs and their level of satisfaction. To reflect user requirements in the software process, correct design of the database model is a critical stage of software development. Database design is a fundamental tool for modeling all the requirements related to users' data. Faults in database design have adverse effects on all subsequent software development processes; they can also cause continuous changes in the software and in the desired functionality of the targeted system, which may result in user dissatisfaction. In this context, reflecting the user requirements accurately in the database model, and ensuring that the database model is understood correctly by every stakeholder involved in the software development process, directly affect the success of software systems. In this study, a two-stage conceptual data modeling approach is proposed to reduce complexity, improve the understandability of database models, and improve software quality. The study first describes the proposed two-stage conceptual data modeling; the method's impact on software engineers' comprehension is then investigated and the results are examined.
The results of this study show that the proposed two-stage conceptual modeling approach improves software engineers' level of understanding and eliminates possible defects at this stage.

Item A GENERALIZATION OF ARNOLD'S CAT MAP AND FRACTION BASED EMBEDDING IN IMAGE STEGANOGRAPHY (2022-02-15) Buker, Mohamed; Tora, Hakan; Gökçay, Erhan
The rapid development of data communication, and the increased amount of information communicated over networks, make it very important to find new ways to protect exchanged information. Encryption is one of the most widely used methods in this area. Steganography is a more recent field of research in which the communicated information is made invisible rather than merely encrypted; the idea is to hide the existence of the information itself. As long as a third party knows the information exists, whether encrypted or not, the information is at risk. In this thesis, we present a steganographic model with two levels of security. First, the secret image is scrambled using our Generalized Arnold Cat Map (ACM). Then, the scrambled image is embedded into another image using our Fraction Based Embedding (FBE) technique in the transform domain, using both the Discrete Wavelet Transform (DWT) and the Lifted Wavelet Transform (LWT). The efficiency of our model was tested on benchmark color images; Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity (SSIM) and correlation values were calculated. Results show that our Generalized ACM is more robust than the standard and modified versions of ACM.
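The standard Arnold Cat Map that the thesis generalizes maps each pixel coordinate (x, y) of an N×N image to ((x + y) mod N, (x + 2y) mod N). Because the map is a bijection on the pixel grid, iterating it scrambles the image yet is periodic: after enough iterations the original image returns. A minimal sketch of the standard map and its period (the thesis's specific generalization is not reproduced here):

```python
def cat_map(img):
    """One iteration of the standard Arnold Cat Map on a square image (list of lists)."""
    n = len(img)
    out = [[None] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            # Pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n)
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def period(n):
    """Number of iterations after which every pixel of an n x n image returns home."""
    img = [[(x, y) for y in range(n)] for x in range(n)]
    cur, steps = cat_map(img), 1
    while cur != img:
        cur = cat_map(cur)
        steps += 1
    return steps

print(period(5))  # → 10: a 5x5 image is restored after ten iterations
```

The period depends on N in an irregular way, which is one reason scrambling with the map (and stopping well before the period) is effective as a pre-encryption step.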
At the same time, our new FBE technique performs better than other techniques with regard to PSNR and MSE values.

Item A GENERIC ONTOLOGY CREATION TOOL: A CASE STUDY ON BUSINESS SECTORS (2022-03-01) Yılmaz, Ekrem Çağlar; Turhan, Çiğdem; Güray, Cenk
To retrieve any information from the Web, a search has to be performed on billions of documents which are unorganized, unstructured and unreadable by machines. To overcome this problem, the data on the Web has to be formalized in a machine-readable format. One of the solutions is the Semantic Web technology, which provides structure and meaning to data on the Web. To provide machine-readable and semantically identified information, the Semantic Web utilizes ontologies, which include resources, properties and their relations to identify metadata about data. Current ontology editors require expertise to create, organize, edit and manage ontologies. In this study, a generic ontology creation tool is developed for users with no expertise in ontology creation. The tool, which can be easily and effectively used at every level of a business, gathers information about the ontology from a non-expert, providing step-by-step guidance through user interfaces. The aim is to enable any employee of a firm to create an ontology in their own domain, so as to share information in machine-readable form with the rest of the company or with other companies. The tool is tested on users with different working experiences in terms of years and sectors of business.
The results are evaluated with statistical methods, which show that, on average, the users are satisfied with the tool and are able to create ontologies in their own domains.

Item A METHODOLOGICAL APPROACH FOR SERIOUS GAME SOFTWARE DEVELOPMENT: AN APPLICATION FOR LANGUAGE DISORDERS (2012-01-25) ÇAĞATAY, Mehmet; EGE, Pınar; ÇAĞILTAY, Nergiz
Computer software is actively used in education in many different ways today. However, for several reasons, educational institutions are failing to integrate this software into current educational environments, and they have been criticized for using technologies similar to those used a hundred years ago. We believe that one of the reasons for this failure to integrate educational software technologies into current educational environments is the complexity of these systems: developing efficient software that addresses real-life problems is a complex process. There are various software development methodologies, especially for complex software, with regular and planned development processes. These methodologies are appropriate for almost all software; however, given the unique needs and development process of educational software, they may be inadequate. In other words, the educational software development process requires additional considerations, such as domain experts, that are not part of the development process of commercial software projects. In this thesis, a new educational software development methodology involving the domain experts and their interactions with the end users is recommended. Additionally, this methodology is used in the development of a serious game that supports the therapy of children with impaired speech and language.
In this study, the contribution to current therapy sessions of the serious game developed using the proposed educational software development methodology is evaluated. The aim is to better address the problems of current therapy sessions by developing the software according to the new methodological approach. In other words, this study is a case study showing how the proposed methodology is applied in the development of a serious game, as well as its impact on current therapy sessions.

Item A NEW METHOD FOR SOFTWARE DEFECT PREDICTION BASED ON OPTIMIZED MACHINE LEARNING TECHNIQUES (2022-03-01) HASSEN, SHAHO ISMAEL HASSEN; YAZICI, Ali; MISHRA, Alok
In this thesis, a novel and robust heuristic-driven neuro-computing model was developed for software defect prediction. Unlike other classical machine learning models, neuro-computing, especially the Levenberg-Marquardt Neural Network (LM-ANN), is considered more robust in terms of adaptive learning, which can be vital for learning non-linear features and hence defect data. However, as with other machine learning models, the likelihood of local minima and poor convergence could not be avoided, due to the exceedingly high weight-estimation burden for 17 input features. Considering this, the research contributes a novel improved genetic algorithm, a heuristic model developed to assist the ANN with adaptive weight estimation and updates during learning. The key purpose of the heuristic model is to help the LM-ANN achieve superior weight estimation, updating and learning without running into local minima or convergence problems. As a result, the proposed neuro-computing model achieves higher accuracy than the classical neural network on the targeted software fault datasets.
In addition to improving the classifier, this research also focused on feature engineering, which helped alleviate the risks of class imbalance, over-fitting and poor convergence.

Item ABSTRACTIVE TEXT SUMMARIZATION USING DEEP LEARNING (2022-01-11) ABBAS, HANAN WAHHAB; YILDIZ, Beytullah
The ability to produce summaries automatically helps to improve knowledge dissemination and retention, as well as efficiency, in a variety of fields. There are basically two approaches to summarizing: abstractive and extractive. The abstractive approach is considered more successful, as it creates a brief summary of the source text that captures the main ideas; summaries created this way may contain new phrases and sentences not present in the original text. Attention-based Recurrent Neural Network encoder-decoder models have been popular for a variety of language-related tasks, including summarization and machine translation. Recently, in machine translation, the Transformer model has proven superior to Recurrent Neural Network-based models. In this thesis, we propose an improved encoder-decoder Transformer model for text summarization. As a baseline, we used Long Short-Term Memory with attention, a Recurrent Neural Network model, for the abstractive text summarization task. Evaluation of this study is performed automatically using the ROUGE score.
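ROUGE, used for the evaluation above, measures n-gram overlap between a generated summary and a reference summary. A minimal sketch of ROUGE-1 (unigram overlap), where whitespace tokenization is a simplifying assumption; real implementations add stemming and proper tokenization:

```python
from collections import Counter

def rouge_1(candidate, reference):
    """ROUGE-1 precision, recall and F1 over unigram (whitespace-token) overlap."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: a candidate token counts at most as often as it occurs in the reference
    overlap = sum(min(cand[w], ref[w]) for w in cand)
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = rouge_1("the cat sat on the mat", "the cat lay on the mat")
print(round(r, 3))  # → 0.833: 5 of the 6 reference tokens are matched
```

ROUGE-2 and ROUGE-L follow the same idea with bigrams and longest common subsequences, respectively.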
Experimental results show that the Transformer model provides better summaries and a higher ROUGE score.

Item AN ADAPTIVE EDUCATIONAL MODEL FOR FLIPPED CLASSROOM (2017-03-07) Ahmed, Aisha Abdulaali Abdulla; ERYILMAZ, MELTEM
This study aimed to develop a flipped classroom model using adaptive technologies for primary school students, and to identify individual differences among third-grade primary students of English in Libya under an adaptive technique in flipped learning, flipped learning alone, and traditional instruction, at the remembering, understanding and applying levels of Bloom's Taxonomy. The study attempted to answer the following question: are there any differences between the traditional style of education, flipped learning, and the adaptive technique in flipped learning on achievement tests, according to Bloom's Taxonomy (remembering, understanding, and applying), for third-grade primary students of English? To accomplish the objectives of the study and answer its question, three tests were constructed, their validity and reliability were ensured in appropriate ways, and the study sample was selected and divided randomly into three groups: (1) the experimental group, taught by the adaptive technique in flipped learning; (2) the first control group, taught through traditional instruction; (3) the second control group, taught by flipped learning. The study found that on the pre-test the groups were homogeneous; however, on the midterm test and post-test there were statistically significant differences in favor of the experimental group.

Item AN EVALUATION OF THE USE OF SOFTWARE ENGINEERING PRACTICES BY COGNITIVE MODELLING RESEARCHERS (2022-02-17) Kurtaran, Furkan; Say, Bilge
As an instance of scientific software, cognitive modelling is used to reveal how brains work at different levels of abstraction.
Although there have been studies of software engineering practices in other domains of scientific modelling, cognitive modelling has not been inspected from a software engineering point of view. An international online survey of cognitive modelling researchers was carried out to pinpoint relevant issues, as well as to see whether there were any self-stated differences in software engineering practices between developers and modellers, or between high-level cognitive modellers and computational neuroscientists. It was found that researchers in cognitive modelling, as in other scientific software domains, find frequent changes in teams problematic and specifying requirements hard, acknowledge the need for documentation, and want to improve their software engineering practices. Participants find software engineering practices relevant, but their familiarity and level of use are lower, with the exception of version control and change management, which are deemed both relevant and practiced. There are no significant differences between developers and modellers, except that modellers stated themselves to be more appreciative of validation. Similarly, no significant differences were found between high-level cognitive modelling researchers and computational neuroscience researchers in their stated level of use of software engineering practices. However, researchers with larger team sizes use validation and verification more than those in smaller teams or working alone, and larger user bases increase researchers' use of issue and bug tracking.

Item AN INVESTIGATION OF THE IMPACT OF DIFFERENT DATA CLEANING TECHNIQUES ON METRIC RESULT QUALITY IN MACHINE LEARNING (2022-06-14) ABBAS, Israa Mustafa; TOKER, Sacip
The enormous growth of data due to e-commerce platforms and online applications has posed a big challenge for data analysis and processing.
It is now a frequent practice for e-commerce web sites to enable their customers to write reviews of products they have purchased. Such reviews provide valuable sources of information on these products: product reviews are an important data source for the sentiment analysis used by online product firms. This huge volume of data presents a great challenge, and these datasets contain various data quality issues. Typically, data cleaning techniques are applied before the data is deployed, especially in supervised machine learning, where models are trained on historical, labelled data to predict unseen data that the model has never learned from before. In this thesis, we focused on a design-of-experiments study in machine learning [1], applying Ronald Fisher's theories [2] to find cause-effect relationships. To carry out this experimental design study, we chose supervised machine learning classification algorithms for sentiment analysis, an approach within natural language processing (NLP) and a popular way for organizations to determine and categorize opinions about a product or service; it involves the use of data mining, machine learning and artificial intelligence to mine text for sentiment and subjective information [3]. The study used Multinomial Naïve Bayes, Random Forest and Logistic Regression to analyze the impact of five experimental groups (duplicate data, punctuation marks, stop words, lemmatizer, TF-IDF transform) against one control group (no data cleaning applied), in order to determine the impact of each experimental group on the three models' efficiency and classification ratio and to explain the interesting observations. A simulation was run on 353 projects chosen randomly from the Amazon product review dataset, drawn from twenty-four different categories; the dataset was collected from Amazon.com by McAuley and Leskovec [4][5]. After collecting the metric dataset, SPSS software was used for the analysis.
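The TF-IDF transform listed among the experimental factors above re-weights raw term counts by how rare a term is across the corpus, so frequent, uninformative words contribute less to classification. A minimal pure-Python sketch, assuming the common smoothed weighting idf = ln(N / df) + 1 (libraries differ in their exact smoothing):

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Return one {term: weight} dict per document: term count times rarity weight."""
    n_docs = len(corpus)
    tokenized = [doc.lower().split() for doc in corpus]
    # Document frequency: how many documents contain each term
    df = Counter(term for doc in tokenized for term in set(doc))
    weights = []
    for doc in tokenized:
        counts = Counter(doc)
        weights.append({t: c * (math.log(n_docs / df[t]) + 1) for t, c in counts.items()})
    return weights

docs = ["good product good price", "bad product", "good delivery"]
vecs = tf_idf(docs)
# Per occurrence, "good" (in 2 of 3 docs) is weighted lower than the rarer "price"
print(vecs[0]["good"] / 2 < vecs[0]["price"])  # → True
```

The resulting weight vectors are what classifiers such as Multinomial Naïve Bayes or Logistic Regression would consume in place of raw counts.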
A repeated-measures ANOVA was performed to examine the research question, together with descriptive statistics of the metrics used. The analysis shows that data cleaning has differing impacts on machine learning model performance: in some cases it impacted Random Forest positively and Multinomial Naïve Bayes and Logistic Regression negatively, while in other cases it had no impact at all. Overall, the experimental results showed the Random Forest classifier to be more sensitive to data cleaning than the Multinomial Naïve Bayes and Logistic Regression classifiers, both of which achieved high classification scores on the uncleaned dataset. Moreover, the experiments showed that the behavior of data issues differs between machine learning models; we cannot treat data quality issues as irrelevant for every machine learning algorithm. The analysis results are explained in detail in the results and discussion chapters (Chapters 4 and 5).

Item ANALYSIS OF FILTERING AND QUANTIZATION PREPROCESSING STEPS IN IMAGE SEGMENTATION (2013-08-14) ÇALAMAN, Seda; KOYUNCU, Murat
Extracting semantic information from an image involves a series of processes, one of which is image segmentation. Image segmentation splits the image into smaller parts (segments) such that each segment has similar features, such as similar colors or textures. In this thesis, the effects of preprocessing methods on the image segmentation process are analyzed from different perspectives. Firstly, Peer Group Filtering, one of the preprocessing methods used before image segmentation, is applied to the images and its effect on segmentation is analyzed; the Peer Group Filtering algorithm eliminates noise and smooths color changes in images. Secondly, Lloyd's quantization algorithm, another preprocessing method used before image segmentation, is applied and its contribution to segmentation is investigated. Lloyd's quantization algorithm reduces the number of colors in images.
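Lloyd's quantization algorithm mentioned above alternates between assigning each value to its nearest codebook level and recomputing each level as the mean of its assigned values. A minimal 1-D sketch (grayscale values rather than full color vectors, as a simplification; the structure carries over to color quantization):

```python
def lloyd_quantize(values, levels, iterations=20):
    """1-D Lloyd quantizer: iteratively refine the codebook `levels`."""
    levels = list(levels)
    for _ in range(iterations):
        # Assignment step: group each value with its nearest codebook level
        clusters = [[] for _ in levels]
        for v in values:
            nearest = min(range(len(levels)), key=lambda i: abs(v - levels[i]))
            clusters[nearest].append(v)
        # Update step: move each level to the mean of its assigned values
        levels = [sum(c) / len(c) if c else levels[i] for i, c in enumerate(clusters)]
    return levels

pixels = [10, 12, 11, 200, 205, 198]
print(lloyd_quantize(pixels, levels=[0, 255]))  # → [11.0, 201.0]
```

With two levels, the dark and bright pixel groups converge to their means, which is why choosing the number of levels (the quantization level studied in the thesis) matters so much for segmentation quality.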
Finally, two different segmentation algorithms (the fast scanning algorithm and the JSEG algorithm) are compared using preprocessed images. Natural and synthetic images were tested experimentally in this study. The results clearly indicate that after Peer Group Filtering preprocessing, segmentation accuracy increases while the running time of segmentation decreases. On the other hand, the quantization experiments show that the selected quantization level is very important for benefiting from Lloyd's quantization algorithm: if the correct quantization level is selected, quantization helps the segmentation process.

Item ANALYZING THE ADMINISTRATIVE AND STAFF REQUIREMENTS OF E-GOVERNMENT SERVICES AND PRIORITIZING THE DEPLOYMENT WITH MODULAR DESIGN: IRAQI CORRECTION SERVICES CASE STUDY (2022-02-28) Alameri, Mohammed; Bostan, Atila; Akman, İbrahim
The Iraqi Correction Services (ICS) is a department within the Ministry of Justice. Most services are provided through a traditional paper-and-pen system, which causes delays in service provision and work overload for ICS staff, translating into a high cost of service provision. Analysis of the collected data showed that priority should be given to the following services when automating ICS services: requests for permission for official work, sharing inmate status information, rehabilitation services for released inmates, requests for inmates' visits, legal email services, informational services for inmate families, announcement of ICS responsibilities, and controlled and filtered email services between inmates and their families.
The results of the analysis also point to web-based service delivery as the most preferred user interface for automating the services.

Item APPLYING USER-CENTERED DESIGN TO M-LEARNING APPLICATION FOR ATILIM UNIVERSITY LECTURES (2022-02-25) Kartal, Kağan; Ertürk, Korhan Levent
Today, IT systems are developed around the interaction between humans and computers, but their usability is not examined enough. Usability is a term that refers to the user experience and is explained as the ease of use of a product; a highly usable system provides benefits for both users and businesses, such as efficiency and quality. This is related to User-Centered Design and Design-Based Research. User-Centered Design (UCD) is a design process that focuses, throughout the development lifecycle, on the users who will use the product; this focus helps elicit exact requirements from users. For a successful UCD process, the Design-Based Research methodology is well suited, as it has a similar lifecycle. The System Usability Scale (SUS) is a measuring tool for rating the usability of a system; it has been called "quick and dirty," yet it is reliable. SUS evaluates the usability of a system through 10 statements and gives a score out of 100. In this study, an M-Learning application was developed with high usability: UCD was integrated with the Design-Based Research methodology in the development process, and the system was evaluated with SUS. As a result of this study, UCD and the Design-Based Research methodology were integrated and applied successfully.
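The SUS scoring used above maps ten 1-to-5 Likert responses onto a 0-100 scale: odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the sum is multiplied by 2.5. A sketch of that computation:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Strong agreement with the positive items and strong disagreement with the
# negative items yields the maximum score
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

On this scale the application's reported 90.25 is well above the commonly cited average of 68.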
The M-Learning application developed in this study was evaluated by users using SUS, and it scored 90.25 out of 100.

Item AUDITOR TECHNOLOGY AND PRIVACY CONTROL TO SECURE E-LEARNING INFORMATION ON CLOUD STORAGE (2017-01-07) AL-KHAFAJI, Khalid Muhammad Kareem; ERYILMAZ, MELTEM
The aim of this study is to propose, establish and apply a safe mechanism, Technique Relationship Protected (TRP), that uses privacy control together with an auditor on the information shared between two of the most important services offered by modern technology, namely e-learning platforms and cloud storage; these technologies rely on databases of information that need to be kept safe. The study sheds light on the concept of ensuring privacy for information shared between the e-learning system and the cloud platform by proposing a distinctive privacy-preserving mechanism that supports public auditing of the information shared between these technologies. The study takes advantage of ring signatures to verify the authenticity of the shared encrypted information. With our mechanism, the identity of the signer of each block of the shared information is kept hidden from public verifiers, while an administrative body is able to check the integrity of the shared information without retrieving the entire file. The proposed system provides a privacy-preserving auditing mechanism for information shared on cloud storage, using AES (Advanced Encryption Standard) to protect the shared information by encrypting it as it is uploaded by its owners.
Users are certified by the e-learning system administrators, and the encrypted information is decrypted upon request for trusted users of the e-learning system, after authorization by the owners of the information. We take advantage of ring signatures to build homomorphic authenticators, so that a general verifier has the capacity to audit the integrity of shared information without retrieving the complete data or accessing its content. In addition, the verifier cannot recognize who signed a file; only the administrator has the power to reveal the identity of the signer. Additional functions in the work of the auditor contribute to consolidating this protected relationship between the technical services, which has become an urgent necessity in a modern technical world containing a massive amount of data and information shared and transmitted daily.

Item AUTOMATIC SPIRULINA DETECTION USING IMAGE PROCESSING TECHNIQUES (2022-02-14) SIDDIK, Othman; BOSTAN, Atila
In this thesis, a study on the automatic detection of spirulina is presented. Spirulina is an algal microorganism with 4 species, and it is quite useful for determining and monitoring water quality. The thesis contribution is to develop an automatic process to help diagnose spirulina in water. Most spirulina can be diagnosed by size and shape from microscopic images, and fast, accurate algae detection is critical for water quality assessment. Currently, manual methods are used to detect spirulina, which can give rise to inaccurate results; it is also a very tedious effort to detect algae in microscopic water images.
Automatic detection of spirulina is a challenging task due to factors such as changes in size and shape with climatic conditions, growth periods and water contamination. Nowadays, the automated detection of spirulina is one of the most fervent topics in applied biology. Meanwhile, Deep Learning and Convolutional Neural Networks (CNNs) are yielding better results and are judiciously used for image classification and a variety of other problems. This thesis introduces CNNs to the automated spirulina detection problem in order to demonstrate whether they can solve it. A comprehensive spirulina image dataset was specifically prepared using a customized artificial image generation technique applied to original images collected from rivers and lakes in Turkey; the dataset, covering different illumination conditions, was computationally augmented to 1000 sample images. The thesis reports the background to the spirulina detection problem, the methodology used, and the results of the image processing and feature extraction methods applied to locate and extract spirulina in a microscopic image. Initially, the RGB image format with morphological operations was employed, and a detection accuracy of 84% was observed. Afterwards, three different methods were experimented with for comparison, with the following relative detection success rates: SURF 63%, FAST feature detection 67%, and CNN 99%. A CNN was used to solve the 4-class spirulina detection problem, and some future work is suggested to improve the study further.
The observed results were discussed and compared with those of previous studies. To the best of our knowledge and our survey of the literature, this is the first study to employ CNNs on the automated spirulina detection problem.

Item COMPARISON OF QUALITY OF SERVICE (QoS) SUPPORTS IN IPV4 AND IPV6 (2015-06-25) AL-FAYYADH, Hayder; KOYUNCU, Murat
Providing guaranteed services on the current Internet has become extremely essential to fulfill the requirements of Internet users. With an increase in the number of users and in demand for multimedia applications like video streaming, VoIP and video conferencing, bandwidth requirements increase drastically, since such applications are very sensitive to delay, packet loss, and jitter; however, it is not feasible to provide enough bandwidth to satisfy such high demand. Quality of Service (QoS) is an important network performance concern with a significant impact on multimedia applications, and IPv6 was designed to improve on the QoS supported by IPv4, among other improvements. In this thesis, we discuss quality of service and the various parameters affecting it on the Internet, and we compare the IPv4 and IPv6 protocols in terms of their performance for multimedia applications. In this scope, various queuing algorithms (First-In First-Out (FIFO), Priority Queuing (PQ), and Weighted Fair Queuing (WFQ)), which are typical scheduling algorithms used by routers, are examined for their effects on multimedia applications in IPv4 and IPv6 environments. In addition, integrated services (IntServ) and differentiated services (DiffServ), two important techniques developed to provide QoS on the Internet, are elaborated, again in both IPv4 and IPv6 environments. For the comparisons, a simulation framework based on OPNET Modeler is used for modeling, simulating and analyzing network behavior.
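Of the queuing disciplines compared, Priority Queuing is the simplest to illustrate: the router always dequeues from the highest-priority non-empty queue, which favors delay-sensitive traffic such as VoIP but can starve lower classes (the weakness WFQ addresses). A minimal sketch, where the class name and traffic labels are illustrative assumptions:

```python
import heapq
from itertools import count

class PriorityQueuingScheduler:
    """Strict Priority Queuing: always serve the highest-priority packet first."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # FIFO tie-break within the same priority class

    def enqueue(self, packet, priority):
        # Lower number = higher priority (e.g. 0 = VoIP, 1 = video, 2 = bulk data)
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityQueuingScheduler()
sched.enqueue("bulk-1", 2)
sched.enqueue("voip-1", 0)
sched.enqueue("voip-2", 0)
order = [sched.dequeue() for _ in range(3)]
print(order)  # → ['voip-1', 'voip-2', 'bulk-1']
```

FIFO corresponds to a single queue with no priority field, while WFQ would instead interleave the classes in proportion to configured weights so bulk traffic still makes progress.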
The results obtained from the simulations clearly show that IPv6 with the different QoS techniques performs better in terms of jitter and delay than the IPv4 protocol.

Item COMPARISON OF SCHEDULING USED IN BIG DATA FRAMEWORKS (2022-02-24) Aljumaili, Saif; KARAKAYA, Ziya; YAZICI, Ali
Big Data applications have grown to become one of the main ingredients of the current information technology sector, providing an opportunity for decision-makers to achieve the best outcomes, for instance in commerce and business. However, such data arrives at varying speeds for storage, management, and processing, and traditional database systems cannot handle tasks such as massive data collection. Resource management and task scheduling play an essential role in Big Data processing, and schedulers can be classified by their features, effectiveness, performance, and so on. In this thesis, we classify, compare and investigate in detail several schedulers employed in Big Data frameworks. Moreover, the thesis identifies the weaknesses and strengths of these schedulers in different use cases, and examines scenarios of use-case suitability to determine where an individual scheduler is weak or unsuitable. These issues have not been covered in existing studies.