Department of Modeling and Design Engineering Systems
Browsing Department of Modeling and Design Engineering Systems by Subject "computer engineering"
Item: A GENERALIZATION OF ARNOLD'S CAT MAP AND FRACTION BASED EMBEDDING IN IMAGE STEGANOGRAPHY (2022-02-15)
Buker, Mohamed; Tora, Hakan; Gökçay, Erhan
The rapid development of data communication and the increasing amount of information exchanged over networks make it very important to find new ways to protect that information. Encryption is one of the most widely used methods in this area. Steganography is a more recent field of research in which the communicated information is made invisible rather than merely encrypted: the idea is to hide the existence of the information itself. As long as a third party knows that information exists, whether encrypted or not, the information is at risk. In this thesis, we present a steganographic model with two levels of security. First, the secret image is scrambled using our Generalized Arnold Cat Map (ACM). Then, the scrambled image is embedded into another image with our Fraction Based Embedding (FBE) technique in the transform domain, using both the Discrete Wavelet Transform (DWT) and the Lifted Wavelet Transform (LWT). The efficiency of our model was tested on benchmark color images, and Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity (SSIM), and correlation values were calculated. Results show that our Generalized ACM is more robust than the standard and modified versions of ACM, and that our new FBE technique performs better than other techniques in terms of PSNR and MSE values.

Item: A GENERIC ONTOLOGY CREATION TOOL: A CASE STUDY ON BUSINESS SECTORS (2022-03-01)
Yılmaz, Ekrem Çağlar; Turhan, Çiğdem; Güray, Cenk
To retrieve any information from the Web, a search has to be performed on billions of documents that are unorganized, unstructured, and unreadable by machines. To overcome this problem, the data on the Web has to be formalized in a machine-readable format.
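The steganography item above scrambles the secret image with a generalization of Arnold's Cat Map. The generalized map itself is not given in the abstract, but the standard ACM it builds on can be sketched in a few lines (a minimal illustration in Python; the `arnold_cat_map` helper is hypothetical, not the thesis's implementation):

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Scramble a square image with the standard Arnold Cat Map:
    (x, y) -> (x + y, x + 2y) mod N. The map is a bijection on the
    pixel grid and is periodic, so enough iterations recover the image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "ACM is defined on square images"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

For a 4x4 image this map has period 3, so three applications return the original image; the thesis's generalized version modifies this transformation to improve robustness, per the reported results.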
One solution is the Semantic Web technology, which provides structure and meaning to data on the Web. To provide machine-readable and semantically identified information, the Semantic Web utilizes ontologies, which comprise resources, properties, and their relations to identify metadata about data.
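An ontology of the kind just described, resources, properties, and their relations identifying metadata about data, can be sketched as a plain triple store (a toy illustration in plain Python; the `Ontology` class and the example triples are hypothetical, not the tool's actual data model):

```python
class Ontology:
    """A toy triple store: each fact is a (resource, property, value) triple."""
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return sorted(t for t in self.triples
                      if (subject is None or t[0] == subject)
                      and (predicate is None or t[1] == predicate)
                      and (obj is None or t[2] == obj))

onto = Ontology()
onto.add("Employee", "is_a", "Class")      # a resource
onto.add("worksIn", "is_a", "Property")    # a relation
onto.add("alice", "type", "Employee")      # metadata about data
onto.add("alice", "worksIn", "Sales")
```

Real ontology editors build on richer formats (RDF, OWL), but the underlying structure is this kind of subject-predicate-object graph.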
The current ontology editors require expertise to create, organize, edit, and manage ontologies. In this study, a generic ontology creation tool is developed for users with no expertise in ontology creation. The tool, which can be used easily and effectively at every level of a business, gathers information about the ontology from a non-expert, providing step-by-step guidance through user interfaces. The aim is to enable any employee of a firm to create an ontology in their own domain, so that information can be shared in machine-readable form with the rest of the company or with other companies. The tool is tested on users with different amounts of working experience and from different business sectors. The results are evaluated with statistical methods, which show that, on average, the users are satisfied with the tool and are able to create ontologies in their own domains.

Item: AUTOMATIC SPIRULINA DETECTION USING IMAGE PROCESSING TECHNIQUES (2022-02-14)
SIDDIK, Othman; BOSTAN, Atila
In this thesis, a study on the automatic detection of spirulina is presented. Spirulina is an algal microorganism with four species that are quite useful for determining and monitoring water quality. The contribution of this thesis is an automatic process to help diagnose spirulina in water. Most spirulina can be identified by size and shape in microscopic images, and fast, accurate detection of algae is critical for water quality assessment. Manual methods are currently used to detect spirulina, which can give rise to inaccurate results; it is also a very tedious effort to detect algae in microscopic water images. Automatic detection of spirulina is a challenging task due to factors such as changes in size and shape with climatic conditions, growth periods, and water contamination. Nowadays, the automated detection of spirulina is one of the most active topics in applied biology.
On the other hand, deep learning with Convolutional Neural Networks (CNNs) yields better results and is a widely used technique for image classification and a variety of other problems. This thesis introduces CNNs into the automated spirulina detection problem in order to demonstrate whether they can solve it. A comprehensive spirulina image dataset was specifically prepared using a customized artificial image generation technique applied to original images collected from rivers and lakes in Turkey; covering different illumination conditions, the dataset was computationally augmented to 1000 sample images. The thesis reports the background to the spirulina detection problem, the methodology used in the study, and the results of the image processing and feature extraction methods applied to locate and extract spirulina in a microscopic image. Initially, RGB images with morphological operations were employed to detect spirulina in microscopic images, and a detection accuracy of 84% was observed. Afterwards, three further methods were tested for comparison; their detection success rates were 63% for SURF, 67% for FAST feature detection, and 99% for the CNN. The CNN was used to solve the four-class spirulina detection problem, and the observed results were discussed and compared with those of previous studies. Some future work is also suggested to improve the study further.
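The morphological-operations step reported above relies on standard binary morphology. A minimal NumPy sketch (illustrative only; the thesis presumably used library routines, and the wrap-around behaviour of `np.roll` at image borders is a simplification):

```python
import numpy as np

def dilate(mask, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element."""
    out = mask.copy()
    for dx in range(-k, k + 1):
        for dy in range(-k, k + 1):
            out |= np.roll(np.roll(mask, dx, axis=0), dy, axis=1)
    return out

def erode(mask, k=1):
    """Binary erosion, implemented as dilation of the complement."""
    return ~dilate(~mask, k)

def opening(mask, k=1):
    """Erosion followed by dilation: removes specks smaller than the
    structuring element, separating noise from candidate organisms."""
    return dilate(erode(mask, k), k)
```

Applied to a thresholded microscopic image, an opening of this kind leaves connected blobs whose size and shape can then be measured to identify spirulina.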
To the best of our knowledge and our survey of the literature, this is the first study to employ CNNs in the automated spirulina detection problem.

Item: CONCEPTUAL DESIGN OF E-GOVERNANCE IN DISASTER MANAGEMENT SYSTEM (2022-01-25)
IBRAHIM, Thaer; MISHRA, Alok; BOSTAN, Atila
Disasters pose a real threat to the lives and property of citizens; therefore, it is necessary to reduce their impact as much as possible. To achieve this goal, a framework for enhancing the current Disaster Management System (DMS), called the Smart Disaster Management System (SDMS), was proposed. The smart aspect of this system comes from the application of the principles of Information and Communication Technology (ICT), especially the Internet of Things (IoT). All participants and activities of the proposed system were clarified by preparing a conceptual design using Unified Modeling Language (UML) diagrams (both use-case and activity diagrams). This effort was made to overcome the lack of citizens' readiness to use ICT as well as to increase their readiness for disasters. Iraq was chosen as a case study for this research. The lack of readiness on the part of Iraqi citizens was inferred using two different methods: interviews with experts in the fields of disasters and ICT, and a questionnaire distributed to the target sample.

Item: DESIGN AND IMPLEMENTATION OF A PARALLEL BOUNDARY ELEMENT METHOD SOLUTION FOR 3D PARTICLE FLOW PROBLEMS IN MICROCHANNELS (2015-01-30)
KARAKAYA, Ziya; BARANOĞLU, Besim; YAZICI, Ali
A new formulation for tracking multiple particles in slow viscous flow for microfluidic applications is presented. The method manipulates the boundary element matrices so that a system of equations is obtained relating the rigid-body velocities of the particle to the forces applied on the particle.
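Once a system of that kind relating rigid-body velocities to applied forces is available, trajectory tracking reduces to repeated matrix-vector products and time stepping. A schematic sketch (the mobility matrix here is an isotropic Stokes-drag stand-in, not the thesis's boundary-element matrices; `track_particle` and all parameter values are hypothetical):

```python
import numpy as np

def track_particle(position, mobility, force, dt, steps):
    """Explicit Euler trajectory tracking: each step computes the rigid-body
    velocity from the applied force via the mobility matrix, U = M @ F."""
    traj = [np.asarray(position, dtype=float)]
    for _ in range(steps):
        velocity = mobility @ force            # successive matrix products
        traj.append(traj[-1] + dt * velocity)  # advance the particle
    return np.array(traj)

# Stand-in mobility for an isotropic sphere (Stokes drag: M = I / (6*pi*mu*a))
mu, a = 1.0e-3, 1.0e-6                         # assumed viscosity and radius
M = np.eye(3) / (6.0 * np.pi * mu * a)
F = np.array([1.0e-12, 0.0, 0.0])              # constant piconewton force in x
path = track_particle([0.0, 0.0, 0.0], M, F, dt=1.0e-3, steps=100)
```

In the actual formulation the BEM-derived matrices depend on the particle positions and are recomputed as the particles move, which is where the SMP parallelisation of the matrix products pays off.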
The formulation is specially designed for particle trajectory tracking and involves successive matrix multiplications, for which Symmetric Multiprocessing (SMP) parallelisation is applied. It is observed that the present formulation offers an efficient numerical model for particle tracking and can easily be extended to multiphysics simulations in which several physical phenomena are involved.

Item: NEURAL NETWORK BASED FEATURE EXTRACTION FOR HANDWRITTEN DIGIT RECOGNITION (2017-01-07)
Günler Pirim, Mine Altınay; Tora, Hakan; Öztoprak, Kasım
In this dissertation, it is proposed that the hidden-layer outputs of a semi-trained neural network be used as feature vectors. In pattern recognition, a neural network is normally trained to perform classification; in addition to this role, this thesis shows that a semi-trained neural network can serve as a tool to extract hidden-layer output vectors that are used as features of the image. The system is composed of three main stages: preprocessor, feature extractor, and classifier. Only the classifier stage differs between experiments; the other two stages are the same throughout. Support vector machine, neural network, and Euclidean distance classifiers are utilized. The experiments were conducted on the MNIST and USPS benchmark datasets to evaluate the performance of the proposed approach.

Item: STUDY OF WORD EMBEDDING RULES AND MACHINE LEARNING BASED TEXT CLASSIFICATION (2022-01-26)
AUBAID, Asmaa; Mishra, Alok; GÖRÜR, Abdülkadir
With the growth of online information and the rapid increase in the number of electronic documents on the Web and in digital libraries, categorizing text documents has become difficult. Embedding, rule-based, and machine learning approaches are among the best solutions to this problem; the rule-based approach is considered one of the most flexible methods, since it opens up the black box of the text classification process.
The details of the classification process can be inspected, and tools or new instructions can be added to obtain better results. This approach has high value for information retrieval, e-government, information filtering, text databases, digital libraries, and other applications. Combining an embedding technique with rule generation is very significant for text categorization. The general idea of any embedding technique is to determine the importance of keywords, keeping informative words and removing non-informative ones, which then helps the text-categorization engine assign a document to a category. This thesis applies the rule-based approach with the word-to-vector (word2vec) and document-to-vector (doc2vec) embedding techniques. These two techniques are used to prepare keywords based on similarity computation; those keywords then drive the rule-based classifier, whose performance is evaluated with measures such as accuracy, recall, precision, and F-measure. Experiments were performed on the Reuter 21578 and 20 Newsgroups datasets, classifying the top ten categories of each. The rule-based approach was implemented in Python, and its overall effectiveness was measured with the F-measure score, error rate, and accuracy. The rule-based approach with the doc2vec embedding (d2vRule) achieved, on the Reuter 21578 dataset, 79% precision, 75% recall, 76.75% F-measure, a 9.28% error rate, and 90.72% accuracy. On the 20 Newsgroups dataset, the results were 76% precision, 66.64% recall, 70.98% F-measure, a 9.93% error rate, and 90.07% accuracy.
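The similarity-based keyword step described in this abstract can be sketched with plain cosine similarity, with a toy rule assigning every category whose keyword vector is close enough to the document vector (the `classify` rule and the 0.5 threshold are hypothetical stand-ins, not the thesis's actual d2vRule/w2vRule rules):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(doc_vec, category_keywords, threshold=0.5):
    """Toy rule: assign every category whose keyword vector is
    similar enough to the document's embedding vector."""
    return [cat for cat, vec in category_keywords.items()
            if cosine_similarity(doc_vec, vec) >= threshold]
```

In practice the document and keyword vectors would come from trained word2vec or doc2vec models rather than the tiny hand-written vectors used here.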
In addition, when the machine learning algorithms J-RIPPER (JRip), One Rule (OneR), and ZeroR were applied to the Reuter 21578 dataset, F-measure and accuracy metrics of 0.713-0.752, 0.506-0.598, and 0.219-0.39 were obtained for JRip, OneR, and ZeroR, respectively. Applying those algorithms to our dataset confirmed that our algorithm (d2vRule) performed better than the three algorithms above and provides a good classification process according to the evaluation metrics. When the embedding technique is used with the word2vec model, the results likewise depend on the precision, recall, and F-measure evaluations. Finally, our rule-based approach outperforms the machine learning methods Naïve Bayes, Naive Bayes Updateable, Rules.DecisionTable, Lazy.IBL, and Lazy.IBK. In the validation of our rule-based (w2vRule) approach, the rule-based (RB) classifier of a certain reference has the highest accuracy, with 82.19% of instances correctly classified, while Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), and Bayes Net (BN) have accuracies of 81.72%, 81.49%, 81.19%, and 77.85%, respectively, and the Temporal Specificity Score (TSS) classifier correctly classified 77.19% of the referenced instances. Our word-to-vector rule-based classifier (w2vRule) achieved, on the Reuter 21578 dataset, 73% precision, 77.71% recall, 75.09% F-measure, a 10.09% error rate, and 89.91% accuracy, the best result when compared with the previous rule-based and machine learning classifiers.
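The F-measure figures quoted throughout this abstract can be related back to the precision and recall values with the standard formula; for instance, 79% precision and 75% recall give an F1 of about 76.9%, close to the reported 76.75% (small gaps of this kind typically come from averaging over categories). A minimal helper:

```python
def f_measure(precision, recall, beta=1.0):
    """F_beta score; beta = 1 gives the harmonic mean of precision and recall."""
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Reuter 21578 d2vRule figures from the abstract above
print(round(f_measure(0.79, 0.75), 4))  # -> 0.7695
```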