Department of Computer Engineering
Browsing Department of Computer Engineering by Issue Date
Now showing 1 - 20 of 41
Item: SITUATIONAL METHOD ENGINEERING FOR REQUIREMENT ENGINEERING PHASE (2009-05-30), AYDIN, Seçil; MISHRA, Deepti

This thesis focuses on the requirements engineering phase, reviewing existing requirements engineering methods and comparing them according to the constraints of software projects. It has been found that some techniques are better suited to particular project teams and circumstances. Moreover, methods are normally general in nature and cannot be used directly without being adapted to the characteristics of the project. This is the concern of situational method engineering, where the term "situational method" refers to a method tailored to the needs of a particular development setting. A criterion methodology is established to distinguish requirements engineering methods from one another according to different project characteristics. A tool is implemented to store different methods according to this criterion methodology using situational method engineering, and it is compared with other tools in the literature. The tool is published and validated by collecting data from industry, and the results gathered are presented and discussed for the improvement of the proposed approach.

Item: A METHODOLOGICAL APPROACH FOR SERIOUS GAME SOFTWARE DEVELOPMENT: AN APPLICATION FOR LANGUAGE DISORDERS (2012-01-25), ÇAĞATAY, Mehmet; EGE, Pınar; ÇAĞILTAY, Nergiz

Computer software is actively used in education in different ways today. However, for several reasons, educational institutions are failing to integrate this software into current educational environments, and they have been criticized for using technologies similar to those used a hundred years ago. We believe that one of the reasons for this failure to integrate educational software technologies into current educational environments is the complexity of these systems.
Developing efficient software that addresses real-life problems is a complex process. There are various software development methodologies, especially for complex software, with regular and planned development processes. These methodologies are appropriate for almost all software; however, in terms of the unique needs and development process of educational software, they may be inadequate. In other words, the educational software development process requires additional considerations, such as domain experts, that are not accounted for in the development of commercial software projects. In this thesis, a new educational software development methodology that involves domain experts and their interactions with end users is proposed. This methodology is then applied to the development of a serious game that supports the therapy of children with impaired speech and language. The study primarily evaluates the contribution of the serious game, developed using the proposed methodology, to current therapy sessions, aiming to better address the problems of those sessions by developing the software according to the new methodological approach. In other words, this is a case study showing how the proposed methodology is applied to the development of a serious game, as well as its impact on current therapy sessions.

Item: A DATABASE DESIGN METHODOLOGY FOR COMPLEX SYSTEMS (2013-07-14), TOPALLI, Damla; ÇAĞILTAY, Nergiz

The quality of software is directly related to how well it addresses users' needs and their level of satisfaction. To reflect user requirements in the software processes, correct design of the database model is a critical stage of software development. Database design is a fundamental tool for modeling all the requirements related to users' data.
Possible faults in database design adversely affect all software development processes. They can also cause continuous changes in the software and in the desired functionality of the targeted system, which may result in user dissatisfaction. In this context, reflecting user requirements accurately in the database model, and ensuring that every stakeholder involved in the software development process understands the model correctly, directly affects the success of software systems. In this study, a two-stage conceptual data modeling approach is proposed to reduce complexity, to improve the understandability of database models, and to improve software quality. The study first describes the proposed two-stage conceptual data modeling, then investigates the method's impact on software engineers' comprehension and examines the results. The results show that the proposed two-stage conceptual modeling approach improves software engineers' understanding and eliminates possible defects at this stage.

Item: ANALYSIS OF FILTERING AND QUANTIZATION PREPROCESSING STEPS IN IMAGE SEGMENTATION (2013-08-14), ÇALAMAN, Seda; KOYUNCU, Murat

Extracting semantic information from an image involves a series of processes, one of which is image segmentation. Image segmentation splits the image into smaller parts (segments) such that each segment has similar features, such as similar colors or textures. In this thesis, the effects of preprocessing methods on the image segmentation process are analyzed from different perspectives. First, Peer Group Filtering, one of the preprocessing methods used before image segmentation, is applied to the images and its effect on segmentation is analyzed. The Peer Group Filtering algorithm eliminates noise and smooths color changes in images.
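The core idea of such a peer-group filter can be sketched as follows. This is a minimal grayscale illustration; the window size, peer-group size, and simple averaging are assumptions for demonstration, not the exact algorithm evaluated in the thesis:

```python
def peer_group_filter(img, radius=1, peers=3):
    """Replace each pixel by the mean of the `peers` window values
    closest in intensity to it (its peer group, which includes the
    pixel itself). `img` is a list of rows of grayscale values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            center = img[y][x]
            window = [img[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            # peer group: window values most similar to the center pixel
            group = sorted(window, key=lambda v: abs(v - center))[:peers]
            out[y][x] = sum(group) / len(group)
    return out

# A single bright outlier in a flat region is pulled toward its peers,
# while the surrounding flat pixels are left untouched.
noisy = [[10, 10, 10],
         [10, 99, 10],
         [10, 10, 10]]
smoothed = peer_group_filter(noisy)
```

Because the peer group is chosen by similarity rather than position, edges between genuinely different regions are preserved better than with a plain mean filter.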
Second, Lloyd's quantization algorithm, another preprocessing method used before image segmentation, is applied and its contribution to segmentation is investigated. Lloyd's algorithm reduces the number of colors in an image. Finally, two different segmentation algorithms (the fast scanning algorithm and the JSEG algorithm) are compared using preprocessed images. Natural and synthetic images are tested experimentally in this study. The results clearly indicate that after Peer Group Filtering, segmentation quality increases while the run time of segmentation decreases. The quantization experiments, on the other hand, show that the selected quantization level is critical for benefiting from Lloyd's algorithm: if the correct quantization level is selected, quantization helps the segmentation process.

Item: COMPARISON OF QUALITY OF SERVICE (QoS) SUPPORTS IN IPV4 AND IPV6 (2015-06-25), AL-FAYYADH, Hayder; KOYUNCU, Murat

Providing guaranteed services on the current Internet has become essential to fulfill the requirements of Internet users. With the increase in the number of users and the demand for multimedia applications such as video streaming, VoIP, and video conferencing, bandwidth requirements increase drastically, since such applications are very sensitive to delay, packet loss, and jitter. However, it is not feasible to provide enough bandwidth to satisfy such high demand. Quality of Service (QoS) is an important network performance parameter with a significant impact on multimedia applications. IPv6 was designed to improve, among other things, the QoS supported by IPv4. In this thesis, we discuss quality of service and the various parameters affecting it on the Internet, and we compare the IPv4 and IPv6 protocols in terms of their performance for multimedia applications.
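Router scheduling disciplines of the kind examined in such comparisons can be illustrated with a toy model. The packet names and two-level priority scheme below are illustrative, not the OPNET configuration used in the thesis:

```python
import heapq
from collections import deque

def fifo_order(packets):
    """FIFO: serve packets strictly in arrival order."""
    q = deque(packets)
    return [q.popleft()[1] for _ in packets]

def pq_order(packets):
    """Priority Queuing: always serve the highest-priority packet first
    (lower number = higher priority); arrival index breaks ties."""
    heap = [(prio, i, name) for i, (prio, name) in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in packets]

# (priority, packet): 0 = voice (delay-sensitive), 1 = bulk data
arrivals = [(1, "data1"), (0, "voip1"), (1, "data2"), (0, "voip2")]
print(fifo_order(arrivals))  # ['data1', 'voip1', 'data2', 'voip2']
print(pq_order(arrivals))    # ['voip1', 'voip2', 'data1', 'data2']
```

The contrast is the whole point of priority queuing for multimedia traffic: under FIFO, voice packets wait behind earlier bulk data, while PQ serves them first, at the cost of potentially starving low-priority traffic.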
In this scope, various queuing algorithms (First In First Out (FIFO), Priority Queuing (PQ), and Weighted Fair Queuing (WFQ)), which are typical scheduling algorithms used by routers, are examined to see their effects on multimedia applications in IPv4 and IPv6 environments. In addition, Integrated Services (IntServ) and Differentiated Services (DiffServ), two important techniques developed to provide QoS on the Internet, are examined, again in both IPv4 and IPv6 environments. For the comparisons, a simulation framework based on OPNET Modeler is used to model, simulate, and analyze network behavior. The simulation results clearly show that IPv6 with the different QoS techniques performs better in terms of jitter and delay than IPv4.

Item: COMPARISON OF VARIOUS TRANSITION MECHANISMS FROM IPv4 TO IPv6 (2015-06-25), AL-FAYYADH, Faris; KOYUNCU, Murat

IPv4, the old version of the Internet Protocol, has a successor named IP Next Generation (IPng), or IPv6, developed by the IETF. The new version was developed especially to resolve issues of IPv4 such as scalability, performance, and reliability. Although the new version is ready for use, it will clearly take years to transition fully from IPv4 to IPv6, and the two versions will have to be used together for a long time. Therefore, the transition mechanisms available during the transition period must be investigated and understood so that the transition proceeds with minimum problems. This thesis analyzes current IP transition techniques and makes an empirical evaluation of the three most commonly used transition mechanisms: Automatic 6to4 Tunneling, Manual 6in4 Tunneling, and Dual Stack. The test results are also compared with the results of native IPv6 and native IPv4 environments. The empirical evaluation is based on simulations carried out using the OPNET simulation framework.
The outcomes of the thesis provide insight for choosing an appropriate transition technique, as well as ideas for network capacity planning and migration.

Item: EMOTION ESTIMATION FROM FACIAL IMAGES (2017-01-07), NAJAH, GOMA MOHAMED SALEM; Şengül, Gökhan

Predicting emotions from facial images is a popular and active research area, implemented via many methods. In this thesis, the proposed system for predicting emotions from facial-expression images contains several stages. The first is a preprocessing stage: the face is detected in the image, the image is resized, and Histogram Equalization (HE) is applied to normalize the effects of illumination. The second stage extracts features from the facial-expression images using the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature extraction algorithms, generating training and testing datasets containing the expressions Anger, Contempt, Disgust, Embarrass, Fear, Happy, Neutral, Pride, Sad, and Surprised. Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) classifiers are then used in the classification stage to predict the emotion, and the confusion matrix is used to evaluate classifier performance. The proposed system is tested on the JAFFE, KDEF, MUG, WSEFEP, TFEID, and ADFES databases, achieving a prediction rate of 96.13% when the HOG+SVM method is used.

Item: IRIS RECOGNITION BY USING IMAGE PROCESSING TECHNIQUES (2017-01-07), ALHAMROUNI, MOHAMED; Şengül, Gökhan

Iris recognition has become very important, especially in the field of security, because it provides high reliability. Many researchers have suggested new methods for iris recognition in order to increase system efficiency. In this thesis, various methods are proposed to achieve high performance in iris recognition.
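The Local Binary Pattern features used in both of these recognition studies encode, for each pixel, which neighbors are at least as bright as it is. A minimal 8-neighbor version (the clockwise neighbor ordering is one common convention, assumed here):

```python
def lbp_code(img, y, x):
    """8-neighbor LBP code of pixel (y, x): each neighbor at least as
    bright as the center contributes one bit, clockwise from top-left."""
    c = img[y][x]
    neighbors = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
                 img[y][x+1],   img[y+1][x+1], img[y+1][x],
                 img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
# Only the top row (bits 0..2) is brighter than the center 5.
print(lbp_code(patch, 1, 1))  # 7
```

In a full pipeline, the histogram of these codes over an image region, rather than the raw codes, is what serves as the feature vector fed to the SVM or KNN classifier.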
In the proposed system, three feature extraction approaches, Histogram of Oriented Gradients (HOG), Gray Level Co-occurrence Matrix (GLCM), and Local Binary Pattern (LBP), are used to extract features from the iris image. Two classifiers, K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), are used in the classification stage. The iris image passes through several stages before feature extraction: first, a preprocessing stage that resizes all images to a uniform size; second, a segmentation stage that determines the iris region in the eye image; and finally, a normalization stage that converts the iris region to a suitable shape with specific dimensions. The proposed methods are applied to two iris databases, UPOL and IITD, and the system achieves a recognition rate of 100% when the HOG+KNN method is used.

Item: AUDITOR TECHNOLOGY AND PRIVACY CONTROL TO SECURE E-LEARNING INFORMATION ON CLOUD STORAGE (2017-01-07), AL-KHAFAJI, Khalid Muhammad Kareem; ERYILMAZ, MELTEM

The aim of this study is to propose and apply a safe mechanism, the Technique Relationship Protected (TRP), that uses privacy control together with an auditor on the information shared between two of the most important services offered by modern technology: e-learning platforms and cloud storage. These technologies rely on databases of information that need to be kept safe. The study aims to ensure the privacy of the information shared between the e-learning system and the cloud platform by proposing a distinctive privacy-preserving mechanism that supports public auditing of the shared information. The study also takes advantage of ring signatures to verify the authenticity of the shared, encrypted information under review.
With our mechanism, the identity of the signer of each block in the shared information is kept hidden from public verifiers, while an auditing body is still able to check the integrity of the shared information without retrieving the entire file. The proposed system thus provides a privacy-preserving auditing mechanism for information shared in cloud storage. AES (Advanced Encryption Standard) is used to protect the shared information: an encryption mechanism is applied to the information uploaded by its owners, who are certified by the e-learning system administrators, and the encrypted information is decrypted on request for trusted users of the e-learning system who have been authorized by the information owners. Ring signatures are used to build homomorphic authenticators, so that a public verifier can audit the integrity of the shared information without retrieving the complete data or accessing its content, and without being able to identify the signer of a file; only the administrator has the power to reveal the signer's identity. Additional auditor functions further strengthen this Technique Relationship Protected.
This protects, and contributes to the success of, the relationship between these technical services, which has become an urgent necessity in a modern technical world containing a massive amount of data and information shared and transmitted daily.

Item: AN ADAPTIVE EDUCATIONAL MODEL FOR FLIPPED CLASSROOM (2017-03-07), Ahmed, Aisha Abdulaali Abdulla; ERYILMAZ, MELTEM

This study aimed to develop a flipped classroom model using adaptive technologies for primary school students, and to identify individual differences among third-grade primary students learning English in Libya under an adaptive flipped learning technique, plain flipped learning, and traditional instruction, at the Remembering, Understanding, and Applying levels of Bloom's Taxonomy. The study attempted to answer the following question: are there any differences between the traditional style of education, flipped learning, and the adaptive flipped learning technique in achievement tests at the Remembering, Understanding, and Applying levels of Bloom's Taxonomy for third-grade primary students in English? To accomplish the objectives of the study and answer this question, three tests were constructed and their validity and reliability were ensured in appropriate ways, and the study sample was selected and divided randomly into three groups: 1. the experimental group (1), taught with the adaptive flipped learning technique; 2. the control group (1), taught through traditional instruction; 3. the control group (2) (also treated as experimental group (2)), taught through flipped learning.
The study found that the groups were homogeneous on the pre-test; however, on the midterm test and post-test there were statistically significant differences in favor of the experimental group (1).

Item: THE ROLE OF USING MOBILE SOCIAL MEDIA LEARNING IN LIBYAN HIGHER EDUCATION (2017-05-02), Alhadad, Salha; Ertürk, Korhan Levent

The use of social media (e.g., Facebook, Twitter, YouTube, Google+) on smartphones and the use of mobile learning applications in education are increasing day by day. This reflects the importance of information and communication technologies (ICT) and their active role in developing educational methods and tools, enhancing learning among students, and promoting active learning; they have become among the most important educational technology tools. Mobile apps and social media create opportunities for interaction and collaboration among students, and allow students to engage in creating content and communicating using social media, Web 2.0, and mobile Web 2.0 tools. This quantitative study employed a survey method to present findings on students' perceptions of learning with mobile devices and the role of social media in Libyan higher education. Data were collected through two surveys, and a mobile learning application was designed based on the survey results to introduce the concept of mobile social media learning to Libyan higher education and to modernize its tools and methods. The initial experimental results are very positive, reflecting the importance of mobile applications and social media in developing and increasing the educational performance of students.

Item: THE AWARENESS TO CREATE THE DIGITAL RESOURCE LIBRARY BASED ON RESPONSIVE WEBSITE DESIGN (2017-05-02), Elbaraasi, Asma; Ertürk, Korhan Levent

This research study analyzed awareness of a digital resource library based on responsive web design.
The background of the study is that a digital resource library has a significant impact on the user experience, enhancing knowledge and the opportunity to obtain more information anytime and anywhere. With the rapid development and growth of websites, responsive web design significantly helps users access the digital library through several kinds of IT devices, such as PCs, tablets, and mobile phones. A quantitative research methodology was used to answer the research questions. The analysis identifies several advantages, including responsive web design, support for diverse digital devices, user interface, display format, search features, storage, processing, flexibility, availability, and reduced cost. While the digital library system at the Higher Institute of Education is only moderately established in the academic system, development, library research, and related activities are among the most effective elements of education. The present digital library system also includes images, video, maps, audio, academic resources, documents, personal records, and other relevant material. The study shows that responsive web design (RWD) significantly addresses the challenges of today's web development and is effective for digital resource libraries, improving the user experience more effectively and efficiently.

Item: ABSTRACTIVE TEXT SUMMARIZATION USING DEEP LEARNING (2022-01-11), ABBAS, HANAN WAHHAB; YILDIZ, Beytullah

The ability to produce summaries automatically helps to improve knowledge dissemination and retention, as well as efficiency, in a variety of fields. There are basically two approaches to summarization: abstractive and extractive. The abstractive approach is considered more successful, as it creates a brief summary of the source text that captures its main ideas.
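Summaries of this kind are commonly scored with n-gram overlap metrics such as ROUGE, which this study also uses for evaluation. A minimal ROUGE-1 recall computation (the simple whitespace tokenization is an assumption; real implementations normalize more carefully):

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    """ROUGE-1 recall: fraction of reference unigrams that also appear
    in the candidate summary, with counts clipped per word."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    return overlap / sum(ref.values())

ref = "the transformer model provides a better summary"
cand = "the transformer gives a better summary"
print(round(rouge1_recall(ref, cand), 3))  # 0.714 (5 of 7 reference words)
```

Precision and the F-measure are computed the same way but normalized by the candidate's length; ROUGE-2 and ROUGE-L replace unigrams with bigrams and longest common subsequences.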
In the abstractive approach, summaries created from the source text may contain new phrases and sentences not included in the original text. Attention-based Recurrent Neural Network (RNN) encoder-decoder models have been popular for a variety of language-related tasks, including summarization and machine translation. Recently, in machine translation, the Transformer model has proven superior to RNN-based models. In this thesis, we propose an improved encoder-decoder Transformer model for text summarization. As a baseline, we use Long Short-Term Memory with attention, an RNN model, for the abstractive text summarization task. Evaluation is performed automatically using the ROUGE score. Experimental results show that the Transformer model provides better summaries and a higher ROUGE score.

Item: STUDENT ACHIEVEMENT PREDICTION BASED ON ARTIFICIAL NEURAL NETWORK VERSUS FUZZY LOGIC (2022-01-14), Al-Khafaji, Mustafa; ERYILMAZ, Meltem

E-learning is currently of great importance in developing the educational process at all stages, from primary classes to postgraduate study, as it provides an interactive graphical environment that is easy to use and attracts students to interact with it. This study used artificial intelligence techniques, namely neural networks and fuzzy logic, to predict the final-exam achievement of students who use an e-learning management system. The dataset was taken from an Iraqi engineering college and represents 200 students enrolled in a computer science course. The features were gender, age, resources downloaded, videos viewed, discussion chats joined, midterm 1 score, midterm 2 score, and final exam score. The artificial neural network used was a pattern recognition network, trained with the Levenberg-Marquardt algorithm.
For the fuzzy logic approach, a Sugeno fuzzy inference system was used. The results were promising: students who spend more time on the learning system have the highest success rate. The neural network was trained and tested, and all results were recorded; its accuracy was 73%. The fuzzy logic technique was more accurate, with an average accuracy of 88%.

Item: New Greedy Algorithms to Optimize the Curriculum-based Course Timetabling Problem (2022-01-14), Coşar, Batuhan Mustafa; SAY, Bilge; Dökeroğlu, Tansel

This thesis presents a set of new greedy algorithms for the optimization of the well-known Curriculum-Based Course Timetabling (CB-CTT) problem, a subtype of the Course Timetabling problem. The main goal of the study is to minimize the total number of soft-constraint violations while preserving the satisfaction of hard constraints (feasible solutions). Since the problem is NP-hard and large instances cannot be solved in practical times, greedy algorithms that produce acceptable results in a few seconds are good alternatives to brute-force and evolutionary algorithms that spend hours searching for an optimal solution. Instead of using a single heuristic, as many greedy algorithms do, we define and execute 120 greedy heuristics on the same problem instance simultaneously and report the overall best result, which is better than what a single greedy heuristic can obtain. The best results are reported with respect to the No Free Lunch theorem, which states that the costs of greedy heuristics should be comparable on average.
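A single greedy heuristic of the general kind combined in this thesis, best-fit assignment of courses to rooms ordered by capacity, can be sketched as follows. The data and the reduction to a pure room-assignment problem are illustrative; the actual CB-CTT model also involves curricula, periods, and soft constraints:

```python
def best_fit_assign(courses, rooms):
    """Assign each course (largest enrollment first) to the free room
    with the smallest sufficient capacity. Returns {course: room} and
    the list of courses left unassigned (a hard-constraint violation)."""
    free = sorted(rooms.items(), key=lambda r: r[1])  # rooms by capacity
    assignment, unassigned = {}, []
    for course, size in sorted(courses.items(), key=lambda c: -c[1]):
        fit = next((r for r in free if r[1] >= size), None)
        if fit is None:
            unassigned.append(course)
        else:
            assignment[course] = fit[0]
            free.remove(fit)
    return assignment, unassigned

courses = {"C1": 90, "C2": 35, "C3": 60}
rooms = {"R1": 40, "R2": 100, "R3": 60}
assignment, unassigned = best_fit_assign(courses, rooms)
print(assignment)   # {'C1': 'R2', 'C3': 'R3', 'C2': 'R1'}
print(unassigned)   # []
```

Running many such heuristics (largest-first, smallest-first, and so on) on the same instance and keeping the best outcome is exactly the portfolio idea the thesis exploits.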
Our proposed greedy algorithms use the Largest-First, Smallest-First, Best-Fit, Average-Weight-First, and Highest-Unavailable-Course-First heuristics simultaneously while assigning courses to the available rooms, which are ordered by capacity according to the criteria above. To evaluate the performance of the proposed algorithms, we carry out experiments on 21 problem instances from the Second International Timetabling Competition (ITC-2007) benchmark set. The experimental results verify that the proposed greedy algorithms report zero hard-constraint violations (feasible solutions) for 18 problems, with significantly reduced soft-constraint values.

Item: FAST HEADER MATCHING IN NETWORK PACKETS USING FIELD PROGRAMMABLE GATE ARRAYS (2022-01-17), NASER, ANWER SABAH; Özbek, Mehmet Efe

This thesis presents a hardware architecture of parallel multiple RAMs that emulates the behavior of content-addressable memory for packet classification. With the increase in Internet speeds, fast detection of intruders has become a basic requirement. In this work, packet header fields are used in a fast and efficient way to detect intruders and prevent them from accessing data. The application test results were fast and compatible when run on an FPGA board from Xilinx. The design and synthesis of this parallel multiple-RAM packet header detector were achieved using the Vivado 2018.2 simulator; the code is written in the Verilog HDL language, and a Xilinx Artix-7 FPGA (Field Programmable Gate Array) kit was used.

Item: Reinforcement Learning for Intrusion Detection (2022-01-17), Saad, Ahmed Mohamed Saad Emam; Yıldız, Beytullah

Network-based technologies such as cloud computing, web services, and Internet of Things systems are becoming widely used due to their flexibility and preeminence.
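The learning rule underneath the Deep Q-Learning this thesis applies to intrusion detection is the classical Q-update. A minimal tabular sketch on a toy labeling problem (the states, actions, and rewards here are illustrative, not the NSL-KDD environment the thesis builds):

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    An s_next not in the table is treated as terminal (future value 0)."""
    best_next = max(Q[s_next].values()) if s_next in Q else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy single-step episodes: label a connection "normal" or "attack";
# the environment pays +1 for a correct label and -1 otherwise.
random.seed(0)
Q = {"benign": {"normal": 0.0, "attack": 0.0},
     "malicious": {"normal": 0.0, "attack": 0.0}}
for _ in range(200):
    s = random.choice(["benign", "malicious"])
    a = random.choice(["normal", "attack"])   # pure exploration
    r = 1.0 if (s == "benign") == (a == "normal") else -1.0
    q_update(Q, s, a, r, "terminal")          # episode ends after one step

# The learned greedy policy flags malicious traffic as "attack".
print(max(Q["malicious"], key=Q["malicious"].get))  # attack
```

Deep Q-Learning replaces the table `Q` with a neural network so the same update generalizes across the high-dimensional connection features of a dataset like NSL-KDD.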
On the other hand, the exponential proliferation of network-based technologies has exacerbated network security concerns, and intrusion takes an important share of those concerns. Developing a robust intrusion detection system is crucial to solve the intrusion problem and ensure the secure delivery of network-based technologies and services. In this thesis, a novel approach using deep reinforcement learning is proposed to detect intrusions and make network applications more secure, reliable, and efficient. For the reinforcement learning approach, Deep Q-Learning is used alongside a custom-built Gym environment that mimics network attacks and guides the learning process. A supervised deep learning solution using a Long Short-Term Memory architecture is implemented as a baseline. The NSL-KDD dataset is used to create the reinforcement learning environment and to train and evaluate the baseline model. The performance results of the proposed reinforcement learning approach show great superiority over the baseline model and other relevant solutions from the literature.

Item: FACE RECOGNITION USING IMAGE PROCESSING AND MACHINE LEARNING METHODS (2022-01-20), Rushdi, Iman Raad Rushdi; ŞENGÜL, Gökhan

The human face is a complex, multidimensional visual construct, which makes it very challenging to create a computational model for its recognition. Face recognition, a method of recognizing a person based on an image of his or her face, has become an important area of study covering subjects such as image processing, computer vision, and machine learning. The main challenge is to correctly identify the right features for facial recognition. This study presents an approach to human face recognition based on features extracted from the image. The face recognition system is applied to the ORL and YALE datasets.
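The classification stage of feature-based recognition pipelines like this one often reduces to nearest-neighbor matching under Euclidean distance. A minimal sketch; the four-dimensional feature vectors are purely illustrative stand-ins for the LBP/GLCM descriptors the study extracts:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest_neighbor(gallery, probe):
    """Return the label whose gallery feature vector is closest to probe."""
    return min(gallery, key=lambda label: euclidean(gallery[label], probe))

gallery = {                      # one enrolled feature vector per person
    "person_A": [0.1, 0.8, 0.3, 0.5],
    "person_B": [0.9, 0.2, 0.7, 0.1],
}
probe = [0.2, 0.7, 0.3, 0.4]     # features from a new face image
print(nearest_neighbor(gallery, probe))  # person_A
```

Feature selection methods such as the PSO used later in the study shrink these vectors to the most discriminative dimensions before the distance is computed.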
The proposed method was implemented in three steps. In the preprocessing phase, the Discrete Wavelet Transform (DWT) with the Daubechies wavelet was applied. In the second step, feature extraction was performed based on the Local Binary Pattern (LBP) and the Gray Level Co-occurrence Matrix (GLCM). In the third step, Euclidean distance was used for the classification phase. The same experiments were also repeated with Particle Swarm Optimization (PSO) for feature selection. The study reached several conclusions. In the first experiments, implementing DWT and LBP, the recognition rate increased as the number of training images increased; implementing DWT, LBP, and GLCM together gave a recognition rate of 82.50% on the ORL database and 90% on the YALE database. The PSO algorithm increased the accuracy to 95% on the ORL database and 93% on the YALE database.

Item: DESIGNING AND DEVELOPING A DIGITAL LIGHT PROCESSING BASED STEREOLITHOGRAPHY 3D PRINTER (2022-01-20), Sahib, Mohammed; Tirkeş, Güzin; Akar, Samet

A Digital Light Processing (DLP) 3D printer is a device that uses Computer-Aided Design (CAD) to produce a 3D object directly from the CAD model through Additive Manufacturing (AM) technology. Different 3D printing technologies vary in most aspects, such as the material, the method of forming the sample, speed, and accuracy, among many other parameters. Choosing the right production method depends on the required task, so that the material and quality fit the required level. DLP technology is recognized for its simple structure, yet it provides a remarkable range of quality and flexibility. This research proposes a custom build that promises good production quality. Three types of experiments with different CAD designs were carried out to verify the capability of the designed 3D printer.
These experiments were implemented successfully in terms of surface quality, fine details, and accurate measurements. Furthermore, two different tests were made to verify the desired output results.

Item: FACE DETECTION BY MACHINE LEARNING ALGORITHMS (2022-01-20), Hamamchi, Ahmed Ameer Hamdi; Şengül, Gökhan

Detecting the presence of faces and non-faces in an image is the initial step of face applications such as face pose estimation (localization), expression analysis, and recognition. The aim of face detection is to determine whether a face appears in a picture or not, and finding the location of the face in the image is one of the most critical steps of any face detection system. The performance of a face detection system directly affects the correct operation of the applications mentioned above, because faces are not static and vary widely in pose, color, lighting conditions, and scale. It is difficult to design an automatic system that overcomes all of these issues, so machine learning algorithms are known as one of the successful tools for building a well-performing face detection system. Face detection can be viewed as a computer vision task that involves detecting one or more human faces in a picture, and it is one of the key steps of face analysis. This thesis gives a general review of the Viola-Jones algorithm, the LBP feature extractor, and the K-NN and SVM classifiers in face detection, which are used to extract characteristics from faces and then classify them in order to build a robust, efficient, and reliable face detection system. The advantages and disadvantages of each method are explained briefly and in detail, and a decision on which approach is more precise and robust than the others is presented.
Finally, a comparison and evaluation of SVM, K-NN, LBP, and Viola-Jones is made based on the datasets used in our work, consisting of thousands of face and non-face images. The LBP, K-NN, SVM, and Viola-Jones methods prove convenient for face detection due to their speed, accuracy, learning ability, and simplicity. The results of this study show that system accuracy can be improved by increasing the number of training images. Two datasets, one with uniform and one with varying face dimensions, are used in the study. Using the dataset with uniform face dimensions, accuracies of 85% for LBP with SVM, 100% for LBP with K-NN, and 88% for Viola-Jones are obtained. Using the dataset with varying face dimensions, accuracies of 83% for LBP with SVM, 57% for LBP with K-NN, and 68% for Viola-Jones are obtained.
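The Viola-Jones detector compared in the study above owes much of its speed to the integral image, which lets the sum of any rectangular region be read off in four lookups regardless of its size. A minimal sketch:

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0..y) x [0..x), with an
    extra zero row and column so lookups need no edge-case handling."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img over rows [top..bottom) and cols [left..right),
    computed with exactly four table lookups."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45 (whole image)
print(rect_sum(ii, 1, 1, 3, 3))  # 28 (5 + 6 + 8 + 9)
```

The Haar-like features that Viola-Jones evaluates are simply differences of such rectangle sums, which is why thousands of them can be tested per window in constant time each.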