<![CDATA[Computer Science and Information Technology]]> en-us 2025-12-17 04:49:39 2025-12-17 04:49:39 ZWWY RSS Generator <![CDATA[Comparative Analysis of Word Embedding Techniques Integrated with Machine Learning for Detecting Offensive Comments on Social Media]]> Source:Computer Science and Information Technology  Volume  13  Number  3  

Victor Thomas Emmah   Chizi Michael Ajoku   and Ibiere Boma Cookey   

The rapid growth of social media has enabled global communication and greatly increased the volume of user-generated content. This, in turn, has amplified the spread of offensive, hateful, and abusive language online. The increasing prevalence of offensive comments has created a need for robust, automated moderation systems, making the automatic detection of offensive comments crucial to maintaining healthy online discourse. This paper presents a comparison of word embedding-based approaches for detecting offensive content, leveraging deep learning techniques to enhance classification accuracy. Using pre-trained word embeddings such as GloVe and Word2Vec, the paper explores their effectiveness for training deep learning models, including simple Neural Networks (NN), Long Short-Term Memory (LSTM) networks, and Convolutional Neural Networks (CNN). The models are trained on a dataset of tweets, applying preprocessing techniques such as tokenization, normalization, and data balancing to improve performance. Experimental results indicate that the CNN performed better than the simple Neural Network and the LSTM with both embedding models, achieving higher accuracies of 65% for Word2Vec and 84.7% for GloVe, thereby demonstrating its capability to capture linguistic patterns associated with offensive language. These results show that word embeddings with semantic understanding, when integrated with deep learning models, outperform traditional vectorization techniques in identifying offensive language with higher accuracy and recall. This highlights the significance of combining high-quality word embeddings with deep learning architectures for effective social media moderation.
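The sketch below illustrates the kind of pipeline described (pretrained word vectors feeding a 1D-CNN tweet classifier) using TensorFlow/Keras. The vocabulary size, layer sizes and the random matrix standing in for GloVe/Word2Vec vectors are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: pretrained embeddings feeding a 1D-CNN binary classifier.
# Sizes and the placeholder embedding matrix are illustrative assumptions only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, initializers

vocab_size, embed_dim = 20000, 100                        # assumed sizes
embedding_matrix = np.random.normal(size=(vocab_size, embed_dim)).astype("float32")

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=initializers.Constant(embedding_matrix),
                     trainable=False),                     # keep pretrained vectors fixed
    layers.Conv1D(128, kernel_size=5, activation="relu"),  # n-gram style filters
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),                 # offensive vs. non-offensive
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_token_ids, labels, validation_split=0.1, epochs=5)
```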

]]>
Nov 2025
<![CDATA[A Review of Fuzzy Database Modeling Usage in the Modern World]]> Source:Computer Science and Information Technology  Volume  13  Number  2  

Amuomo Nixon   

Fuzzy database modelling is vital for current sectors like healthcare and finance because it optimizes the accuracy and flexibility of big data analysis for decision-making. The benefits of fuzzy database modelling, such as its capacity to manage imprecise and ambiguous data, much outweigh the challenges of its implementation, which include complexity and computational costs. The future of fuzzy database modelling is bright because of continuous technical improvements that may result in better data management features and ever-more complicated applications. In addition to highlighting how fuzzy database modelling fosters efficiency and innovation across various industries, this paper provides a comprehensive discussion on fuzzy database modelling, proposing how fuzzy logic concepts can be integrated into traditional database systems to manage uncertain and imprecise data. The work highlights the demerits of traditional database models in the management of uncertain data through extensive literature analysis and by comparing database models such as relational, object-oriented, NoSQL, and fuzzy database modelling. The advantages of implementing fuzzy database modelling in areas like medical diagnosis are illustrated using real-world applications such as in healthcare, finance, manufacturing and Decision Support Systems. This paper adopted a secondary data review of scientific articles, conference papers, books, and academic databases, whose objective was to gather accurate and useful information on fuzzy database model operations. A summary of the findings based on the inclusion criteria of peer-reviewed articles guided this review. Thus, the paper outlined insightful impacts and limitations of implementing fuzzy database modelling in the areas of data sets, query handling, data entity relationships, data uncertainties handling, and flexibilities in use in real-world applications.
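As a small illustration of the core idea the paper discusses (fuzzy membership degrees replacing crisp predicates over ordinary records), the following Python sketch filters records through a trapezoidal membership function. The attribute, breakpoints and data are invented for this example and are not taken from the paper.

```python
# Illustration of a fuzzy predicate over ordinary records: instead of a crisp
# filter (age < 40), each record gets a membership degree in the fuzzy set "young".
# Attribute names, breakpoints and data are invented for this example.
def young_membership(age: float) -> float:
    """Trapezoidal membership: fully 'young' up to 30, fading out to 0 at 45."""
    if age <= 30:
        return 1.0
    if age >= 45:
        return 0.0
    return (45 - age) / 15          # linear decline between 30 and 45

patients = [{"name": "A", "age": 28}, {"name": "B", "age": 38}, {"name": "C", "age": 50}]

# A fuzzy query: return records with membership above a chosen alpha-cut (0.3)
results = [(p["name"], round(young_membership(p["age"]), 2))
           for p in patients if young_membership(p["age"]) >= 0.3]
print(results)   # [('A', 1.0), ('B', 0.47)]
```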

]]>
Jun 2025
<![CDATA[Improving Network Security through Fuzzing Attack Detection: Machine Learning on the Kitsune Dataset]]> Source:Computer Science and Information Technology  Volume  13  Number  1  

Yousef Abuzir   and Dima Raed Abu Khalil   

The rapid growth of network traffic has made cybersecurity a critical concern for many organizations. The detection and classification of network attacks are becoming a complex and difficult task. The objective of this study was to explore and compare three machine learning techniques for detecting network intrusions. These techniques include Linear Discriminant Analysis (LDA), Logistic Regression, and hybrid deep learning models composed of Convolutional Neural Networks (CNNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory networks (LSTMs). This study aimed to evaluate these models based on their classification accuracy, ability to handle class imbalance, and interpretability using Shapley Additive exPlanations (SHAP). The methodology involves training and testing the models on a network traffic dataset, followed by performance evaluation using metrics such as accuracy, precision, recall, F1-score, and AUC. Additionally, SHAP values were computed to assess feature importance and model interpretability. The results revealed that Logistic Regression offers reliable performance, achieving high accuracy and balanced precision and recall for both benign and malicious classes. The CNN-GRU-LSTM hybrid model achieved near-perfect accuracy, but with a significant computational cost and a high False Negative Rate. The LDA model performs well on benign traffic but struggles to detect malicious instances owing to class imbalance. In conclusion, although simpler models, such as Logistic Regression, provide high interpretability and robust performance, the CNN-GRU-LSTM model offers superior classification performance at the cost of increased computational complexity. This study highlights the importance of balancing the model performance with computational efficiency and interpretability. Future work should focus on addressing class imbalance, optimizing models for real-time detection, integrating external threat intelligence, and exploring transfer and continuous learning techniques to enhance model adaptability.
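The following Python sketch shows the logistic-regression baseline and the evaluation metrics named in the study (accuracy, precision, recall, F1, AUC). The feature matrix is a random placeholder rather than the Kitsune features, and SHAP-based interpretation is only indicated in a comment.

```python
# Sketch of the Logistic Regression baseline and the metrics used in the study;
# the data below is a placeholder, not the Kitsune network-traffic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, roc_auc_score

X = np.random.rand(1000, 20)                 # placeholder traffic features
y = np.random.randint(0, 2, size=1000)       # 0 = benign, 1 = fuzzing attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = LogisticRegression(max_iter=1000, class_weight="balanced")  # mitigates imbalance
clf.fit(scaler.transform(X_tr), y_tr)

proba = clf.predict_proba(scaler.transform(X_te))[:, 1]
print(classification_report(y_te, (proba >= 0.5).astype(int)))
print("AUC:", roc_auc_score(y_te, proba))
# Feature importance for interpretability could be explored with the shap package.
```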

]]>
Mar 2025
<![CDATA[AI in Cyber Security: Innovations and Implications for a Safer Digital World]]> Source:Computer Science and Information Technology  Volume  13  Number  1  

Navin Kumar   

In the contemporary digital landscape characterized by rapid technological advancements and interconnected systems, the escalating prevalence and sophistication of cyber threats underscore the exigency for an innovative and adaptive paradigm in cybersecurity methodologies. This treatise expounds on the transformative capacity of Artificial Intelligence (AI) as an indispensable linchpin in the fortification of modern cyber defense architectures. Through the power of advanced computational models, complex algorithms, and leading-edge paradigms of machine learning, AI exhibits unmatched effectiveness in the rapid identification, proactive mitigation, and strategic preemption of a wide variety of cyber threats. This academic study focuses on the multiple applications of AI within the cybersecurity domain, ranging from identifying anomalous patterns that reflect malicious activity to automating incident response mechanisms, minimizing human latency, and applying predictive analytics to prevent potential vulnerabilities. Concurrently, the discussion critically analyzes the intrinsic challenges and limitations of AI integration, such as algorithmic obscurity and bias, the intricacies of data privacy protection, and the deep ethical issues related to autonomous decision-making systems. This research is based on a synthesis of empirical case studies, cross-disciplinary insights, and theoretical frameworks, which posit the need for a synergistic paradigm, one that intricately intertwines AI-driven innovations with conventional cybersecurity practices. Such an approach seeks to inculcate an adaptive, resilient, and robust defense mechanism that can effectively counter the constantly evolving and increasingly malevolent cyber threats posed by digital adversaries in an evolving cyberspace. This paper hopes to contribute to the current conversation on the future path of cybersecurity by arguing for a strategic equilibrium that best exploits AI while critically considering its associated risks.

]]>
Mar 2025
<![CDATA[Early Heart Disease Prediction Using GAN-Augmented Data and Machine Learning]]> Source:Computer Science and Information Technology  Volume  12  Number  1  

Mohammad Ghanem   and Yousef Abuzir   

Cardiovascular diseases (CVD) are a significant cause of death in the general population. This study aimed to create a smart system to predict heart disease early on. We used advanced computer techniques to analyze patient data and identify patterns linked to heart problems. By carefully selecting the most important information and using Generative Adversarial Networks (GANs) to generate additional realistic patient data, we improved the accuracy of our predictions. The study uses 890 records gathered from Al Razi hospital in the city of Jenin. Because the sample size was insufficient for accurate prediction, it was increased using the GAN algorithm. The performance of the proposed ML model was estimated using numerous ML algorithms. K-Nearest Neighbors (KNN), Random Forest, AdaBoost, and Support Vector Machines (SVM) were used in the model. These methods helped us better understand the data and make more accurate predictions. The results obtained using this approach achieve a 99% accuracy rate with the KNN and SVM models when GAN-generated samples and feature selection strategies are utilized. These findings show that the approach combining feature selection and machine learning algorithms is useful for the early and accurate prediction of heart disease.
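A minimal sketch of the classifier-comparison step (KNN versus SVM) on tabular patient data is shown below. The GAN-based augmentation and the clinical feature set from the paper are not reproduced; the data and hyperparameters are placeholders.

```python
# Sketch of comparing KNN and SVM on tabular clinical data; the GAN augmentation
# step from the paper is not reproduced, and the data below is a random placeholder.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(890, 12)                  # placeholder clinical features
y = np.random.randint(0, 2, size=890)        # 0 = no CVD, 1 = CVD

models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```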

]]>
Oct 2024
<![CDATA[Towards the Advanced Technology of Smart, Secure and Mobile Stadiums: A Perspective of Fifa World Cup Qatar 2022]]> Source:Computer Science and Information Technology  Volume  11  Number  2  

Samuel W Lusweti   and Jairus Odawa   

Innovations in recent years have turned technological advancement into a cutting-edge revolution for modern society. Innovations centered on Information Technology have resulted in disruptive technological revolutions around the world. Smart technology has been one of the game-changing innovations that have highly impacted technological advancement. The Internet of Things, one of the blueprints of smart technology, has been applied in various multidisciplinary fields including space exploration, agriculture, and soccer stadiums, among other areas. In this paper, we present the impact of smart technology on enhancing fan experience in football stadiums, with a focus on the 2022 FIFA World Cup. Some technologies adopted during the recently concluded FIFA World Cup competitions held in Qatar in 2022 were completely new in the history of football. Sensor-embedded smart balls, rogue drone hunters, aids for visually impaired fans, air-conditioned stadiums, and automated crowd control systems were all deployed for the very first time since the official World Cup competitions started 92 years ago. The research found that, with these new technologies in place, game management by FIFA officials greatly improved, and fans, irrespective of their physical challenges, had more to celebrate and enjoy than before. Lastly, the paper presents an overview of the first-ever economical, mobile stadium, which was used and then dismantled after the World Cup competitions.

]]>
Oct 2023
<![CDATA[Unlocking the Potential of Soft Robotics with Blender and Unreal Engine: A Powerful Combination for Adaptive Morph Design]]> Source:Computer Science and Information Technology  Volume  11  Number  1  

Md. Imtiaz Hossain Subree   

Soft robotics is a relatively new discipline concerned with the design, production, and control of soft, flexible, and adaptable robotic systems. The capacity to morph, or alter shape in response to external inputs, is a critical element of soft robotics. This feature allows the robot to adapt to changing situations and jobs, making it more versatile and helpful in a wide range of applications. This research describes a unique technique for designing adaptable morphs for soft robotics utilizing a powerful combination of Blender and Unreal Engine. The suggested method uses both the real-time simulation features of Unreal Engine and the modeling and animation features of Blender to test and improve the morphs in a virtual world. The virtual environment simulates the real-world context in which the soft robotic system will function. The simulation's output is then exported and utilized to drive a real-world soft robotic system. Experiments are used to validate the suggested technique, and the findings are evaluated and discussed. The technique has proven helpful in designing and developing adaptive morphs for soft robotics, and it contributes to realizing soft robotics' full potential.

]]>
Apr 2023
<![CDATA[Tools for Implementing the Models and Algorithms for Processing Multimodal Data]]> Source:Computer Science and Information Technology  Volume  11  Number  1  

Nataliya Boyko   

The article discusses technologies and technical means that provide a human-oriented approach to software development, a better adaptation of human-machine interfaces to user needs, improved ergonomics of software products, etc. Measures are analyzed that contribute to the formation of fundamentally new opportunities for presenting and processing information in computer systems about real-world objects with which a person interacts in production, educational and everyday activities. The article is relevant in identifying modern models and algorithms for processing multimodal data in computer systems, based on a survey of company employees and an analysis of these models and algorithms to determine the advantages of their use. The research methods considered are comparative analysis, systematization, generalization, and a survey. An overview of multimodal data presentation models (the muxel model, the spatio-temporal connected model and the multi-level ontological model) is offered, which allows the presentation of a digital twin of the object under study. The means of combining these models to obtain the most informative description of a physical twin are considered. The results of the survey, which addressed the "general judgment about the experience of using models and algorithms of multimodal data processing", are studied: for the item "Personally, I would say that models and algorithms of multimodal data processing are practical", respondents indicated an average value of 8.16 (SD = 1.70), and for the item "I would say that the models and algorithms of multimodal data processing are clear (not confusing)", an average value of 7.52. Respondents (with scores above 5.0 points) were determined to positively evaluate models and algorithms for processing multimodal data in work environments as practical, understandable, manageable and original.

]]>
Apr 2023
<![CDATA[Secure Change Management Process: On the Effectiveness of DevSecOps]]> Source:Computer Science and Information Technology  Volume  10  Number  4  

Zalialetdzinau Kanstantsin   

DevSecOps is a modern organizational and management system for production and development processes in the field of software and digital production. It has significant potential for adoption because it makes it possible to obtain safe, stable software products at reduced resource and time costs, and it has taken shape as the sector's architecture has been modernized through the deep integration of secure approaches at each stage of forming the digital software body of developed digital products. The object of the research is the system of modern digital software development. The subject of the research is the integration of security blocks (as a focus toolkit of the DevSecOps organizational and management system) into digital fabrication, with an assessment of the effectiveness of implementing the concept of "secure software products". The main purpose of the research is to determine the vector of potential development of modernized software-digital production aimed at minimizing vulnerabilities in the digital code body of developed software products and optimizing resource and time costs. To achieve this goal, the study applies methods of scientific search, analytics and determination of correlative convergence, identifying empirical patterns in the results of a bibliometric search of leading scientometric databases regarding organizational and managerial schemes and the development processes of software-digital development and manufacturing. Based on the results of the research within the given horizon of scientific and empirical data, a probabilistic vector of software-digital production and manufacturing development is obtained, focused on deep architectural solutions that reduce the vulnerabilities of the digital code body. The practical significance of the study is determined by the possibility of identifying the appropriate horizons of scientific search and the vector of digital development and production with a focus on the integration of security control solutions.

]]>
Dec 2022
<![CDATA[Covid-19 Pandemic as an Accelerator toward Attainment of ICT Policy-Kenya Vision 2030]]> Source:Computer Science and Information Technology  Volume  10  Number  3  

Samuel W Lusweti   and Collins O Odoyo   

The Covid-19 pandemic struck the world at a time when nobody was really prepared for such a deadly disease. At the onset, everybody was uncertain of what needed to be done in order to stop the rapidly spreading virus. The World Health Organization (WHO) advised countries to implement containment measures to curb the spread of the disease. Some of these measures included restricting the movement of people and asking people to work from home. Working from home became the new norm, especially in Sub-Saharan Africa. The only way people could work from home was through the use of ICT infrastructure as an enabling technology. In this paper, we evaluated the role of ICT in Kenya and how it has been adopted before and during the period of the pandemic. It was found that before the coronavirus disease, the ICT policy promulgated by the government in 2006 and revised in 2019 faced many challenges during its implementation. These challenges included inadequate e-learning resources, inadequate finances and slow e-commerce uptake. During Covid-19, it was found that some of the objectives of the ICT policy were implemented to a greater extent than before. For instance, there was an enhancement in ICT innovation through the local manufacture of ventilators, an improvement in TV broadcasting, enhanced e-commerce adoption, and the availability of affordable internet services. It was consequently found that Kenya is ready to adopt the policy without any challenges provided that all stakeholders play their role as mandated by the ICT policy of 2019.

]]>
Oct 2022
<![CDATA[Investigation on Perception of the Utilization of ICT Tools for Instructional Material Delivery in Schools: Case Study of Selected Tanzanian Secondary Schools]]> Source:Computer Science and Information Technology  Volume  10  Number  3  

Catherine F. Mangare   Deus F. Kandamali   Kelvin J. Mushi   Honesta Ndumbaro   and Evance Muhuwa   

The purpose of this study is to investigate the perception of the utilization of Information and Communication Technology (ICT) tools for instructional material delivery in secondary schools in the Eastern South Tanzania regions. The study explores perceptions regarding secondary school students' utilization of ICT. The target population was extracted from two districts in five regions. Both probability and non-probability sampling procedures were used to select the sample. Headteachers were purposively chosen, while teachers were selected through stratified and simple random sampling. A total of 300 respondents from six secondary schools in Tanzania participated in this study. Primary data were collected using a questionnaire. With the help of statistical software, inferential and descriptive analyses were carried out. The findings of the study revealed that most of the respondents have positive attitudes toward utilizing ICT in delivering secondary school instructional materials. Strategies on how to improve the utilization of ICT in schools have been suggested. The greatest result of utilizing ICT in the secondary education system will ultimately be the economic advancement of Tanzania by preparing students for the innovative, technological, knowledge-based economy. Recommendations on issues relating to ICT infrastructure and use have been given.

]]>
Oct 2022
<![CDATA[Fluorescence Emission Wavelength QSPR Application with Linear Blending Method in Machine Learning Algorithms]]> Source:Computer Science and Information Technology  Volume  10  Number  2  

Nina Bryan   Dewayne A Dixon   Seonguk Kim   and Yeona Kang   

Machine learning tools have been developed to support quantitative structure-activity/property relationship (QSAR/QSPR) modeling research. Better feature selection algorithms in ensemble methods have been used to advance QSPR/QSAR modeling, helping to understand the relation between features and target variables and reducing the computational requirements. Implementing feature importance allows for a more effective and clearer view of the features' relative importance and helps interpret the predictions. However, the main difficulty of ensemble learning methods is that each model leads to different feature selections for interpretation. Therefore, it is necessary to summarize each model and its corresponding features for better performance, resulting in high prediction accuracy. In this article, we use a blending method for prediction and interpretability in terms of the experimental values of fluorescence wavelengths. The blender consists of two levels. The first level uses multiple learners: Random Forest, ExtraTrees, Adaptive Boosting, and Gradient Boosting. The second level applies a linear blending method that summarizes information from these learners. Even though the ensemble learning models accurately predict properties and activities, the algorithms are often so sensitive that even small changes can drastically impact their efficiency and accuracy. Thus, the main idea for overcoming this difficulty is to perform feature selection multiple times in each model to control this sensitivity. Furthermore, the approach accurately predicts the fluorescence data set in a regression task of the Decision Tree-based (DT-based) QSAR/QSPR model. This paper provides the best-optimized features when considering specific experimental chemical or biological values. The tables and figures representing each model's feature selections and accuracy demonstrate the results. They show that even though the number of features for predicting the fluorescence emission wavelength is reduced, the accuracy on the training and test sets is maintained and the computational efficiency is increased.
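The sketch below mirrors the two-level structure described (four tree-based learners whose outputs are combined by a linear blender), using scikit-learn's stacking interface. The data and hyperparameters are placeholders, not the paper's QSPR descriptors.

```python
# Two-level blend sketch: four tree ensembles feed a linear blender.
# Data and hyperparameters are placeholders, not the paper's descriptors.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, ExtraTreesRegressor,
                              AdaBoostRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 30)          # placeholder molecular descriptors
y = np.random.rand(500) * 200 + 400  # placeholder emission wavelengths (nm)

level_one = [
    ("rf",  RandomForestRegressor(n_estimators=200, random_state=0)),
    ("et",  ExtraTreesRegressor(n_estimators=200, random_state=0)),
    ("ada", AdaBoostRegressor(n_estimators=200, random_state=0)),
    ("gb",  GradientBoostingRegressor(n_estimators=200, random_state=0)),
]
blender = StackingRegressor(estimators=level_one, final_estimator=LinearRegression())

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
blender.fit(X_tr, y_tr)
print("R^2 on held-out set:", blender.score(X_te, y_te))
```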

]]>
Jul 2022
<![CDATA[Algorithms and Tools for Securing and Protecting Academic Data in the Democratic Republic of Congo]]> Source:Computer Science and Information Technology  Volume  10  Number  2  

Mugaruka Buduge Gulain   Jérémie Ndikumagenge   and Buhendwa Nyenyezi Justin   

This paper deals with the implementation of algorithms and tools for the security and protection of academic data in the Democratic Republic of Congo. It consists principally of implementing two algorithms and two distinct tools to secure data, in this particular case the academic data of higher and university education in the Democratic Republic of Congo. The design of the algorithms follows the approach that any researcher in data encryption must use during the development of a computer system. Briefly, these algorithms are steps to follow to encrypt information in any programming language. They are based on symmetric and asymmetric encryption: the symmetric one uses the Hill cipher, which represents texts as matrices before they are encrypted, and the asymmetric one uses RSA, with prime numbers encoded on more than 512 bits. As for the tools, we developed them in PHP, which is simply one programming language taken as an example, since it is impossible to use them all. The implemented tools are based on the Caesar, Hill and RSA algorithms and show, through graphical interfaces, how the encryption operations are carried out. They are pedagogical tools intended to help students and other researchers learn how to use the developed algorithms. They were not developed as an end in themselves; rather, they can be used in any information system to prevent and limit unauthorized access to computer systems. They will be used not only for the management of academic fees but also in any other information system, which explains the complexity of the tools developed. We have not been able to solve the versioning problems of the developed prototype: if a new version is released later, some functions may become obsolete, which constitutes a limitation of these tools. This work primarily targets the Ministry of Higher Education and Universities, which can adopt these results and implement them in order to solve the problem of intrusions and unauthorized access, as well as developers and researchers who will use ready-made tools instead of having to develop their own. In the following sections, we demonstrate the steps and the methodology that allowed us to reach our results.
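As a small illustration of the symmetric scheme mentioned, the sketch below encrypts a string with the Hill cipher (block-wise matrix multiplication modulo 26). It is written in Python rather than the paper's PHP, and the key matrix is only an example.

```python
# Illustrative Hill-cipher encryption; the key matrix is an example chosen to be
# invertible modulo 26, and this is not the paper's PHP implementation.
import numpy as np

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
KEY = np.array([[3, 3], [2, 5]])             # example 2x2 key, det = 9, invertible mod 26

def hill_encrypt(plaintext: str) -> str:
    nums = [ALPHABET.index(c) for c in plaintext.upper() if c in ALPHABET]
    if len(nums) % 2:                        # pad to a multiple of the block size
        nums.append(ALPHABET.index("X"))
    cipher = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        cipher.extend((KEY @ block) % 26)    # C = K * P (mod 26)
    return "".join(ALPHABET[int(n)] for n in cipher)

print(hill_encrypt("ACADEMIC DATA"))
```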

]]>
Jul 2022
<![CDATA[Development of the Established Redlich-Kister Finite Difference Solution with MKSOR Iteration for Solving One Dimensional Diffusion Problems]]> Source:Computer Science and Information Technology  Volume  10  Number  1  

Jumat Sulaiman   and Mohd Norfadli Suardi   

This paper contributes a new method known as the second-order Redlich-Kister Finite Difference (RKFD) solution for partial differential equations, especially one-dimensional (1D) diffusion problems. All derivative terms of the proposed method are required for the second-order RKFD discretization process. Arranging the derivative terms leads to the second-order RKFD approximation equations. This approximation equation is then applied to solve the system of RKFD equations. Since the resulting coefficient matrix is large-scale and sparse, the system is solved iteratively to manage the high computational complexity, using the Gauss-Seidel (GS), Kaudd Successive Over-Relaxation (KSOR) and Modified Kaudd Successive Over-Relaxation (MKSOR) iterative methods. All of those iterative methods are developed according to the matrix structure of the system and are applied to three examples of the proposed problem. As a result, the MKSOR iterative method showed significant improvement in performance efficiency compared to the GS and KSOR iterative methods. The performance efficiency is measured by the number of iterations, execution time and maximum norm.
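For orientation, the sketch below runs a standard SOR sweep on the tridiagonal system that a 1D diffusion discretization typically produces, with convergence measured in the maximum norm. It shows only the generic iterative structure; the paper's RKFD discretization and its KSOR/MKSOR variants are not reproduced here.

```python
# Standard SOR on a tridiagonal system (generic structure only; not the paper's
# RKFD discretization or its KSOR/MKSOR variants).
import numpy as np

def sor_tridiagonal(lower, diag, upper, rhs, omega=1.5, tol=1e-10, max_iter=10_000):
    """Solve A u = rhs for tridiagonal A given by its (lower, diag, upper) bands."""
    n = len(rhs)
    u = np.zeros(n)
    for it in range(max_iter):
        max_change = 0.0
        for i in range(n):
            left = lower[i] * u[i - 1] if i > 0 else 0.0
            right = upper[i] * u[i + 1] if i < n - 1 else 0.0
            gs_value = (rhs[i] - left - right) / diag[i]     # Gauss-Seidel value
            new = (1 - omega) * u[i] + omega * gs_value      # SOR relaxation
            max_change = max(max_change, abs(new - u[i]))
            u[i] = new
        if max_change < tol:                                  # maximum-norm test
            return u, it + 1
    return u, max_iter

# Example: steady 1D diffusion -u'' = 1 on [0,1], u(0) = u(1) = 0, 50 interior points
n, h = 50, 1 / 51
u, iters = sor_tridiagonal(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0),
                           np.full(n, h * h), omega=1.8)
print(iters, u.max())
```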

]]>
Apr 2022
<![CDATA[IMPILO Platform - An Innovative Blockchain-Based Global Open Healthcare Social Network]]> Source:Computer Science and Information Technology  Volume  9  Number  3  

Thelma Androutsou   Michael Kontoulis   Konstantinos Bromis   Panagiotis Kapsalis   Ioannis Kouris   Haralampos Karanikas   Alexandros Christodoulakis   Panagiotis Dimitrakopoulos   Dimitris Askounis   and Dimitrios Koutsouris   

The current research project focuses on the development of an innovative social health network and electronic health record system (the IMPILO platform) based on Blockchain technology. The purpose of this project is to provide a series of health services (e.g. booking of medical appointments and tele-counselling meetings) to both citizens and doctors that offer more direct communication and cooperation between them and thus help reduce critical delays in medical care provision. As IMPILO manages health data, in addition to the group of friends found in some form on all social networks, two additional types of user relationships have been defined: the Circle of Trust (which includes a limited number of people with full rights to review and modify a user's profile and health record) and the Care Team (which consists of the user-doctors that have access to a user's profile and medical data and can perform certain actions on their behalf). This feature is useful for keeping profiles of the elderly and minors. The utilization of the capabilities of blockchain technology in areas beyond cryptocurrencies is of particular value, especially in the field of health, where the IMPILO platform creates more reliable and durable networks using blockchain technology, while helping to ensure seamless communication among the various stakeholders (doctors, patients, hospitals, etc.). The decentralized data storage and full control of a patient's consent management process help overcome issues such as the secure management of sensitive personal data and authorization to access medical data. Thus, blockchain is one of the most important components of the IMPILO platform: it acts as a basis for recording all meaningful events in the application and provides users with assurance of the integrity of their data. The application is accessible via a smartphone application (for both Android and iOS operating systems) and a web interface, with a special emphasis on the design of functional, efficient and easy-to-use user interfaces, taking into account the requirements of the end users of the application and the specifics of the health sector.
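The toy sketch below illustrates the blockchain idea underpinning the platform: each recorded event stores the hash of the previous one, so tampering with any entry breaks the chain. It is only a minimal illustration under assumed event fields, not the IMPILO implementation.

```python
# Toy hash-chained event log: each record references the hash of its predecessor.
# Event fields are invented; this is not the IMPILO implementation.
import hashlib, json, time

def make_block(event: dict, prev_hash: str) -> dict:
    body = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

chain = [make_block({"type": "genesis"}, prev_hash="0" * 64)]
chain.append(make_block({"type": "appointment_booked", "doctor": "D-17"}, chain[-1]["hash"]))
chain.append(make_block({"type": "consent_granted", "to": "care_team"}, chain[-1]["hash"]))

# Integrity check: every block must reference the hash of its predecessor
ok = all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
print("chain intact:", ok)
```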

]]>
Oct 2021
<![CDATA[Evaluating State Universities Websites Visibility in One Philippine Region Using Search Engine Optimization Tools]]> Source:Computer Science and Information Technology  Volume  9  Number  2  

Marco Jr. N. Del Rosario   

This study analyses the visibility of state university websites in Region 4A, Philippines, during the pandemic, when universities halted most physically performed transactions and started delivering services online through their websites. An attempt has been made to help web content creators and developers make their websites more visible and easier to find in search engines. The websites' visibility was measured by collecting data through Search Engine Optimization tools, specifically Alexa, CheckPageRank, SEO Analyzer, and Moz Link Explorer. The collected data are the websites' global and national traffic rank, domain authority, PageRank, loading time and speed, daily pageviews, bounce rate, daily time on site, and sites linking in. Websites were ranked based on how good or bad their SEO tool results were. The analysis suggests that the UPLB website is the most optimized, since it ranks first in most of the tools. It was followed by the BatSU and CvSU websites, which consistently placed second and third, respectively. The LSPU website occupies fourth place, the SLSU website fifth place, and the URS website last place. This study suggests that the least-performing websites need to establish more high-quality website links and to create more content that users need, to avoid an increase in bounce rate.

]]>
Aug 2021
<![CDATA[A Case Study of Innovation in the Implementation of a DSS System for Intelligent Insurance Hub Services]]> Source:Computer Science and Information Technology  Volume  9  Number  1  

A. Massaro   A. Panarese   M. Gargaro   A. Colonna   and A. Galiano   

This paper presents a case study of the Project ‘DSS INSURANCE HUB'. Specifically, research activities are carried out in the context of digital transformation in the insurance service sector. In the first part of the paper, a core of Key Performance Indicators (KPIs) of insurance service performance is identified, mainly tracking agents' activities, starting from the Plan, Do, Check, Act (PDCA) process mapping of the insurance activities related to claims. The study then focuses on the implementation of a Long Short-Term Memory (LSTM) artificial neural network that predicts the value of agent-related KPIs. In particular, the neural network is tested on the prediction of the KPI called SP, defined as the ratio between the cost of claims and the insurance premiums collected. In order to validate the LSTM model, further artificial records (AR) are added for the training dataset construction, by generating 2,800 records of variables. The LSTM-AR approach increases LSTM performance by 25%. The adopted approach is typical of real case studies where little data is often available. The LSTM model, created for the SP prediction, is also suitable for calculating the value of other KPIs. The formulated KPI dashboards are implemented in a Decision Support System (DSS) platform providing agent activity and company information, together with opportunities to improve business processes.
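The sketch below shows the shape of an LSTM that predicts the next value of a KPI (such as SP, the ratio of claim costs to collected premiums) from a sliding window of past values. The window length, layer sizes and synthetic series are illustrative assumptions, not the project's configuration.

```python
# LSTM sketch for next-step KPI prediction from a sliding window of past values.
# The synthetic series and hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

window = 12
series = np.random.rand(300).astype("float32")        # placeholder monthly KPI values

X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                                 # shape: (samples, window, 1)

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
print("next predicted KPI value:",
      model.predict(series[-window:].reshape(1, window, 1))[0, 0])
```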

]]>
Jun 2021
<![CDATA[Enhanced Image Segmentation: Merging Fuzzy K-Means and Fuzzy C-Means Clustering Algorithms for Medical Applications]]> Source:Computer Science and Information Technology  Volume  9  Number  1  

Karim Mohammed Aljebory   Thabit Sultan Mohammed   and Mohammed U. Zainal   

Separating an image into regions according to some criterion is called image segmentation. This paper presents an algorithm that combines the fuzzy k-means (FKM) and fuzzy c-means (FCM) clustering strategies. The proposed algorithm, referred to as CFKCM, combines the mathematical features of the FKM and FCM algorithms. The FKM and FCM clustering algorithms are adopted for performance comparison and hence for evaluating the proposed clustering algorithm. Tests are conducted, and performance parameters are calculated for validation. The comparison and assessment analysis are based on metrics related to the image clustering process, such as Segmentation Accuracy (SA), Clustering Fitness (CF), and the cluster validity functions (Vpc and Vpe). A dataset of MR images is used in this research for the application, testing, and evaluation of the image clustering. The results for clustering backbone MRI images show that the CFKCM algorithm is more effective and comparatively noise-independent: it can process both "clean" and noisy images without knowing the type of noise, which is the most difficult task in image segmentation.
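For reference, the sketch below implements plain fuzzy c-means (the standard membership and centroid updates); the paper's combined CFKCM algorithm adds FKM features on top of this and is not reproduced. The data is synthetic.

```python
# Compact fuzzy c-means sketch (standard FCM only, not the paper's CFKCM variant).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)              # normalized membership update
    return centers, U

X = np.vstack([np.random.randn(100, 2) + [0, 0],
               np.random.randn(100, 2) + [5, 5]])      # two synthetic clusters
centers, U = fuzzy_c_means(X, c=2)
print(centers.round(2))
```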

]]>
Jun 2021
<![CDATA[An Enhanced Scalable Design Approach for Managing Large Scale Variability in Software Product Lines (SPLs)]]> Source:Computer Science and Information Technology  Volume  8  Number  4  

Muhammad Garba   Muhammad Nura Malami   Muhammad Muazu   Abubakar Ahmad Aliero   Dalhatu Mohammed   and Bashar Umar Kangiwa   

Variability management remains the main challenge for software product line (SPL) adoption, since variability needs to be efficiently managed at different levels of the SPL development process (for example, requirements analysis, software design, and implementation). With the increase in size and complexity of product lines, and the more holistic systems approach to the design process, managing ever-growing variability models has become increasingly difficult to handle. Accordingly, tool support for variability management has been gathering increasing momentum over the last two decades and can be considered a key success factor for developing and maintaining SPLs. This work presents a new support tool that exhibits a number of features enabling it to deal with large models. The new tool adopts the Separation of Concerns design principle by providing multiple perspectives on the model, each conveying a different set of information. It can handle models comprising more than 1000 features and provides, in particular: the browser (structural) view, displayed using a mind-mapping visualisation technique (hyperbolic trees); the development/edit view, where a new feature can be created either from an existing feature or from scratch; the business view, where information related to project management, cost/benefit analysis, closed/open sets of features and other aspects is presented; and the dependency view, which is displayed graphically using logic gates.

]]>
Oct 2020
<![CDATA[Predicting Road Accident Risk in the City of San Pablo, Laguna: A Predictive Model Using Time Series Forecasting Analysis with Multiplicative Model]]> Source:Computer Science and Information Technology  Volume  8  Number  4  

Reymar V. Manaloto   and Ronnel A. Dela Cruz   

This research used time series analysis mechanisms such as secular trend, irregular fluctuation, and cyclical and seasonal patterns to produce forecasts using the multiplicative model. In the conduct of this study, time, day and location are the main focuses for predicting road accident patterns. By analyzing road accident patterns based on time series forecasting with a multiplicative model, this research provides suggestions for the government to take effective measures to reduce accident impacts and improve traffic safety. This research analyzed the road accident data of San Pablo City, Laguna, Philippines from 2014 to 2016 and forecast the possible prevalence and pattern of road accidents. A total of 1229 road accidents were included in this study. As recorded based on the cumulative frequency, Barangay San Francisco has the greatest number of road accidents, with an average of 101 cases per year. This can be attributed to the fact that the road is considered the busiest because it is the only gateway for provincial travelers from Metro Manila to the southern provinces. For the monthly pattern prediction, April is the riskiest month for road accidents, with a possible 33 cases in a year at 13.08% mean absolute error (86.92% accuracy), probably because most of the community is on summer vacation. In terms of the daily pattern, Sunday is the critical day, with 44 possible cases at 10.29% mean absolute error (89.71% accuracy), and the majority of possible road accidents arise between 6:00 pm and 9:00 pm, with 45 possible cases at 13.21% mean absolute error (86.79% accuracy).
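A minimal sketch of a multiplicative decomposition (trend x seasonal x irregular) of a monthly count series is shown below, using statsmodels. The series is synthetic, not the San Pablo City records, and the trend-times-seasonal product only hints at how a monthly forecast can be assembled.

```python
# Multiplicative decomposition sketch on a synthetic monthly accident-count series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

months = pd.date_range("2014-01-01", periods=36, freq="MS")
counts = pd.Series(
    30 + 0.2 * np.arange(36)                              # slow upward trend
    + 5 * np.sin(2 * np.pi * np.arange(36) / 12)          # yearly seasonality
    + np.random.normal(0, 1, 36),                         # irregular component
    index=months,
)

result = seasonal_decompose(counts, model="multiplicative", period=12)
april_estimate = result.trend.dropna().iloc[-1] * result.seasonal["2016-04-01"]
print("trend x seasonal estimate for April:", round(float(april_estimate), 1))
```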

]]>
Oct 2020
<![CDATA[Computer-based Adaptive Test Development Using Fuzzy Item Response Theory to Estimate Student Ability]]> Source:Computer Science and Information Technology  Volume  8  Number  3  

Fitri Wulandari   Samsul Hadi   and Haryanto   

The field of computing has developed rapidly. Various theories of computational evolution to support human needs are continually being pursued; one of the fields they support is education, especially teaching, testing, and the evaluation of exam results. This study aims to develop computerized adaptive tests (CAT) to measure students' abilities. Students' cognitive abilities are measured in Mathematics and Science subjects. The work starts with developing a question bank, tested with 720 students, to classify items based on their characteristics, i.e., easy, medium, and challenging. This research uses the item response theory approach with the two-parameter logistic (2PL) model, whose parameters are item difficulty and item discrimination. The selection of test items for each participant depends on the response to the previous item. A fuzzy algorithm is used to analyze test items through four stages, namely fuzzification, implication, inference, and defuzzification. Meanwhile, the maximum likelihood estimation (MLE) method is used to measure the ability of test-takers. Based on the testing of 73 students, it was found that each student received different test items, varying in both the number of questions and their level of difficulty, according to the student's ability. The CAT program's estimation of test-taker ability was found to be more effective than conventional methods, as indicated by an average test length of 15 items compared to the 50 items of traditional tests. Therefore, the CAT program with the fuzzy item response theory can be used as support to measure students' abilities.
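The sketch below shows maximum-likelihood ability estimation under the 2PL model, where the probability of a correct response is P(correct | theta) = 1 / (1 + exp(-a(theta - b))). The item parameters and response pattern are invented, and the paper's fuzzy item-selection stage is not reproduced.

```python
# MLE ability estimation under the 2PL IRT model; item parameters and the response
# pattern are invented, and fuzzy item selection is not reproduced here.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])      # item discrimination
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])     # item difficulty
responses = np.array([1, 1, 1, 0, 0])        # examinee's answers (1 = correct)

def neg_log_likelihood(theta: float) -> float:
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x
print("estimated ability:", round(theta_hat, 2))
```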

]]>
Aug 2020
<![CDATA[Transforming Service Delivery in Government through Integration of Web 3.0 and Social Media Technologies: A Case for Developing Countries]]> Source:Computer Science and Information Technology  Volume  8  Number  3  

Josphat Karani Mwai   Simon Maina Karume   and John W Makokha   

This study focuses on how to integrate social media and Web 3.0 features within government portals for personalised service delivery. The study was based on the Kenya government national portal (eCitizen) and sought the views of 94 experts responsible for managing social media and portal services on the eCitizen portal. The study employed Factor Analysis to analyse the experts' responses. Through factor analysis, the study identified six core factors that make up a Personalised Integrated Service Delivery (PISD) framework. These factors were services, Web 3.0 features, social media management, security, a one-stop shop for government services, and external factors. The study also described a Personalised Integrated System Architecture based on the PISD framework. The PISD framework and proposed system architecture can be used to guide the integration of social media and Web 3.0 features within government portals for Personalised Integrated Service Delivery to citizens, especially in developing countries like Kenya.

]]>
Aug 2020
<![CDATA[Ethnography, Its Strengths, Weaknesses and Its Application in Information Technology and Communication as a Research Design]]> Source:Computer Science and Information Technology  Volume  8  Number  2  

Amuomo Nixon   and Collins Otieno Odoyo   

Ethnography was originally developed by anthropologists for the study of foreign cultures. It involves observing situations and carrying out interviews with the study population. Ethnography has two basic characteristics: first, the observation takes place in a natural setting, and second, researchers must understand how an event is perceived and interpreted by the people in a community. Ethnography is therefore a qualitative research method used to study people and cultures for in-depth knowledge about the socio-technological realities surrounding everyday software development practice. Ethnography can help uncover not only what practitioners do, but also why they do it, in areas such as human-computer interaction and user interface design. This is due to its unique strength of involving the researcher, the research process and the research itself, making it a potentially ideal method for undertaking research where the community and its members interact with each other. The main objective of this paper is to examine, through a literature review, the strengths and weaknesses of ethnography as a research design method for researchers in the information and communication technology (ICT) field. This provides more insight into how ethnography can be applied in conducting qualitative information and communication technology studies, especially where in-depth understanding is required.

]]>
May 2020
<![CDATA[A Simple Analytic Approximation of Luminosity Distance in FLRW Cosmology using Daftardar-Jafari Method]]> Source:Computer Science and Information Technology  Volume  8  Number  2  

V. K. Shchigolev   

In this paper, the iterative method suggested by Daftardar-Gejji and Jafari, hereafter called the Daftardar-Jafari method (DJM), is applied to the approximate analytical representation of the luminosity distance in a homogeneous Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology. We obtain the analytical expressions of the luminosity distance using the approximate solutions of the differential equation that the luminosity distance satisfies, subject to the corresponding initial conditions. With the help of this approximate solution, a simple analytic formula for the luminosity distance as a function of redshift is obtained and compared with a numerical solution of the general integral formula computed with the Maple software. A subsequent comparison of the obtained approximate analytical formula with the corresponding numerical solution for the ΛCDM and quintessential models is provided and shows the high accuracy of the DJM approximations, at least for certain values of the model parameters. This comparison demonstrates the efficiency and simplicity of this approach to the problem of calculating the luminosity distance in theoretical cosmology.
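For context, the standard flat-FLRW relations that such an approximation targets are recalled below: the integral definition of the luminosity distance and the first-order differential equation it satisfies. These are textbook background formulas, not the paper's DJM series itself, and the specific dark-energy equation of state used in the paper is not reproduced.

```latex
% Standard flat-FLRW background relations (not the paper's DJM series):
d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{dz'}{E(z')},
\qquad E(z) = \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda},
\qquad \frac{d}{dz}\!\left[\frac{d_L(z)}{1+z}\right] = \frac{c}{H_0\,E(z)},
\quad d_L(0) = 0 .
```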

]]>
May 2020
<![CDATA[Phenomenology Approach Applicability in Information Systems and Technology Field]]> Source:Computer Science and Information Technology  Volume  8  Number  2  

Mary Walowe Mwadulo   and Collins Otieno Odoyo   

Phenomenology is a research approach that is uniquely positioned to help scholars learn from the experience of others. Even though it is a powerful research approach, phenomenology is qualitative in nature, requiring a more subjective, interpretive stance, which makes it incompatible with ICT research that is based on logical, objective and quantifiable procedures. This incompatibility has made ICT scholars shy away from applying it in their research work. However, phenomenology can still be used to explore challenging problems in the field of ICT, by understanding the nature of its studies and working to ensure proper alignment between the specific research question and the researcher's underlying philosophy. An important finding from the different studies that have applied phenomenology is that empiricism and phenomenology do not oppose each other; rather, while empiricism helps understand what happens in ICT use, phenomenology helps uncover the root cause behind the empirical observation. Therefore, it is important to know the areas in ICT research where phenomenology is most suitable. This article examines the applicability of the phenomenology approach in the field of ICT by defining phenomenology, explaining the nature of studies that apply it, highlighting the benefits of applying the approach in ICT, discussing suitable ICT areas where the approach can be applied, and revealing areas in ICT where it has already been applied.

]]>
May 2020
<![CDATA[Use of Access Control List Application for Bandwidth Management among Selected Public Higher Education Institutions in Ethiopia]]> Source:Computer Science and Information Technology  Volume  8  Number  1  

Gedefaw Tilaye   and Lawrence Abraham Gojeh   

A cross-sectional survey on the use of access control list applications for bandwidth management among selected public higher education institutions in Ethiopia was conducted. The objective was to help academic institutions achieve sustainable quality of network service and bandwidth management. A total of 100 information and communication technology staff from 3 university directorates located at Haramaya, Dire Dawa and Odabutum were sampled using purposive and simple random sampling techniques. A structured questionnaire, interviews and an observation checklist were used for data collection, and the data were analyzed to answer the research questions. Results revealed that a set of procedures performed by hardware, software and administrators to monitor access, identify users requesting access, record access attempts, and grant or deny access to resources on the internet was used as the access control list application. Qualitative data analysis also revealed that students were allowed to bring their own devices (no matter the number) and connect them to the campus-wide network. The implication of the results is that the three universities were not taking bandwidth management seriously, as evidenced by the absence of an access control list application. The study concludes that an access control list application for bandwidth management is a necessity, and Graphical Network Simulator software version 3 was recommended as appropriate for implementation.

]]>
Jan 2020
<![CDATA[Analysis and Recurrent Calculation of 8th Rank MBF of Maximal Types]]> Source:Computer Science and Information Technology  Volume  8  Number  1  

Tkachenco V. G.   and Sinyavsky O. V.   

This paper is a continuation of the study of monotone Boolean functions (MBFs) of maximal types using MBF partitioning into schemes. When any of the variables is factored out of the brackets, two MBFs are formed: the left one (in brackets) and the right one. It is proved that one of the variables can be taken out of the brackets in such a way that any conjunctive clause in the left MBF consists of fewer variables than any conjunctive clause in the right MBF. In addition, the left MBF absorbs the right MBF. For the first time, an important class of rank-8 MBFs, the MBFs of maximal types, is studied and analyzed. The number of MBFs of maximal types of rank 8 and the number of isomorphism classes of such MBFs obtained from pairs of rank-7 MBFs are calculated. An example of the recursive construction of a rank-8 MBF is shown. Tables and schemes for rank-8 MBFs are given. The dependences found between maximal-type MBFs of rank n and rank n-1 make it possible to reduce the enumeration of MBFs by constructing rank-8 equivalence classes from rank-7 equivalence classes. The proposed methods are convenient for the analysis of MBFs of large rank.
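A small illustrative example of the factoring step, not taken from the paper, may help fix the terminology:

```latex
% Illustrative factoring of a monotone Boolean function (example not from the paper):
f = x_1 x_2 \lor x_1 x_3 \lor x_2 x_3
  = x_1\,(\underbrace{x_2 \lor x_3}_{\text{left MBF}}) \lor \underbrace{x_2 x_3}_{\text{right MBF}}
```

Here every clause of the left MBF contains one variable while the clause of the right MBF contains two, and the left MBF absorbs the right one, since x_2 ∨ x_3 equals 1 whenever x_2 x_3 does.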

]]>
Jan 2020
<![CDATA[Decision Support System for Multistore Online Sales Based on Priority Rules and Data Mining]]> Source:Computer Science and Information Technology  Volume  8  Number  1  

Alessandro Massaro   Antonio Mustich   and Angelo Galiano   

The work is focused on the design and implementation of an intelligent multi-store e-commerce platform able to manage orders and warehouse stock by means of priority association rules and data mining algorithms. The proposed Decision Support System (DSS) is structured into two main levels: the first is related to priority rule definition for online product requests according to availability checks across the warehouses of different stores, and the second provides important information about sales prediction, thus facilitating stock management and, consequently, logistics. Specifically, the prototype platform is able to manage the warehouse products of different stores by means of a simultaneous comparison of the products available in the different stores linked to the platform, and by means of the XGBoost algorithm, a scalable end-to-end tree boosting system able to predict online sales. The paper has been developed within the framework of an industry project.
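The sketch below shows the shape of the sales-prediction stage with an XGBoost regressor; the feature set (store, product, calendar fields) and the data are placeholders, not the project's order history.

```python
# XGBoost sales-prediction sketch; features and data are placeholders only.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X = np.random.rand(2000, 8)                       # placeholder: store id, product, month, ...
y = np.random.poisson(20, 2000).astype(float)     # placeholder units sold per period

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```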

]]>
Jan 2020
<![CDATA[Impact of Multimedia Instruction in Biology on Senior High School Students' Achievement]]> Source:Computer Science and Information Technology  Volume  7  Number  5  

Allan Ayittey   Emmanuel Arthur-Nyarko   and Francis Onuman   

This study investigated the comparative effectiveness of teaching biology through interactive multimedia and through the conventional teaching method, in terms of senior high school students' achievement. The pretest-posttest non-equivalent quasi-experimental design was used for this study. One hundred and ten (110) form three (Form 3) General Science students who took Biology as an elective subject were selected for the study. They were divided into control and experimental groups. Students in the experimental group were taught through the use of interactive multimedia, whereas the control group was taught through the traditional teaching approach. The study found that both methods were quite effective for teaching photosynthesis in Biology. However, of the two methods, the multimedia approach was found to be more suitable for teaching abstract topics. The study also reported no statistically significant differences in the students' academic performance by gender. These findings suggest that the academic achievement of students in Biology can be improved with multimedia instruction. The study recommends that the computer should be used to complement the teacher's teaching but should not take over the teaching process. A similar study could be carried out in a similar environment but should include more than one topic, as this study used only one.

]]>
Nov 2019
<![CDATA[Mobile Agent Based Distributed Network Architecture with Map Reduce Programming Model]]> Source:Computer Science and Information Technology  Volume  7  Number  5  

Benard O. Osero   Elisha Abade   and Stephen Mburu   

In recent years, the demand for data processing has been on the rise, prompting researchers to investigate new ways of managing data. Our research delves into emerging trends in data management methods, including agent-based techniques, active disk technology, and the use of map-reduce functions in unstructured data management. Motivated by this trend, our architecture uses mobile agent technology and an open source framework called SPADE to implement a simulation platform called SABSA. This research compares the performance of four network storage architectures: Store-and-Forward processes (SAF), Object Storage Devices (OSD), a mobile agent with a Domain Controller (DMC) enhanced with a map-reduce function, and a mobile agent with a Domain Controller and child DMC enhanced with map-reduce (ABMR), the latter two handling both sorted and unsorted metadata. To accurately establish the performance improvements of the new hybrid agent-based models and map-reduce functions, an analytic simulation model was developed, experiments based on the identified storage architectures were performed on it, and analytical data and graphs were generated. The results indicated that all the agent-based storage architectures reduce latencies by up to 45% and reduce access time by up to 21% compared to SAF and OSD.
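As a minimal illustration of the map-reduce programming model referenced above, the sketch below has a map step emitting (key, value) pairs per storage node and a reduce step aggregating them. It is only a sketch of the programming model, not the SABSA simulation platform or its agent framework.

```python
# Toy map-reduce over per-node metadata: map emits (name, 1) pairs, reduce aggregates.
from collections import defaultdict
from functools import reduce

node_metadata = [
    ["fileA.txt", "fileB.txt", "fileA.txt"],     # metadata entries seen by node 1
    ["fileB.txt", "fileC.txt"],                  # metadata entries seen by node 2
]

def map_phase(entries):
    return [(name, 1) for name in entries]        # emit one count per occurrence

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value                      # aggregate counts per file name
    return dict(counts)

all_pairs = reduce(lambda acc, node: acc + map_phase(node), node_metadata, [])
print(reduce_phase(all_pairs))   # {'fileA.txt': 2, 'fileB.txt': 2, 'fileC.txt': 1}
```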

]]>
Nov 2019
<![CDATA[Deep CNN with Residual Connections and Range Normalization for Clinical Text Classification]]> Source:Computer Science and Information Technology  Volume  7  Number  4  

Jonah. K. Kenei   Juliet. C. Moso   Elisha T. Opiyo Omullo   and Robert Oboko   

Deep learning has achieved remarkable performance in many classification tasks such as image processing and computer vision. Due to this impressive performance, deep learning techniques have found their way into natural language processing tasks as well. Deep learning methods are based on neural network architectures such as Convolutional Neural Networks (CNNs) with many layers. Deep learning methods have shown state-of-the-art performance on many classification tasks across several research works, and they have shown great promise in many natural language processing (NLP) tasks such as learning text representations. In this paper, we study the possibility of using deep learning methods and techniques in clinical document classification. We review various deep learning-based techniques and their applications in classifying clinical documents. Further, we identify research challenges and describe our proposed convolutional neural network with residual connections and range normalization. Our proposed model automatically learns and classifies clinical sentences into multi-faceted clinical classes, which can help physicians navigate patients' medical histories easily. Our proposed technique uses sentence embeddings and a convolutional neural network with residual connections and range normalization. To the best of our knowledge, this is the first time that sentence embeddings and deep convolutional neural networks with residual connections and range normalization have been simultaneously applied to text processing. Lastly, the work closes with a general conclusion on clinical document classification and a list of references.
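The sketch below shows a convolutional text classifier with a residual (skip) connection using the Keras functional API. The range-normalization step described in the paper is not reproduced, and all sizes and the class count are illustrative assumptions.

```python
# Convolutional text classifier with a residual connection (sketch only; the paper's
# range normalization is not reproduced, and all sizes are illustrative).
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, embed_dim, max_len, n_classes = 30000, 128, 64, 8   # assumed sizes

inputs = layers.Input(shape=(max_len,), dtype="int32")
x = layers.Embedding(vocab_size, embed_dim)(inputs)

shortcut = x                                             # residual branch
x = layers.Conv1D(embed_dim, 3, padding="same", activation="relu")(x)
x = layers.Conv1D(embed_dim, 3, padding="same")(x)
x = layers.Add()([x, shortcut])                           # residual connection
x = layers.Activation("relu")(x)

x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)  # multi-class clinical labels

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```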

]]>
Jul 2019
<![CDATA[Bi-objective Optimization Model for Integrated Preventive Maintenance and Flexible Job-shop Scheduling Problem]]> Source:Computer Science and Information Technology  Volume  7  Number  4  

Javad Rezaeian   and Farzaneh Mohammadpour   

This paper develops an integrated model for the flexible job-shop scheduling problem with maintenance activities. Reliability models are used to plan the maintenance activities. The model involves two objectives: minimization of the maximum completion time for the flexible job-shop production part, and minimization of system unavailability for the preventive maintenance (PM) part. To achieve these objectives, two decisions must be taken at the same time: assigning n jobs to m machines in order to minimize the maximum completion time, and finding the appropriate times to perform PM activities to minimize system unavailability. These objectives are pursued while considering dependent machine setup times for operations and release times for jobs. The number of maintenance activities and the PM intervals are not fixed in advance. Two multi-objective optimization methods are compared for finding the Pareto-optimal front in the flexible job-shop problem case. Given the promising results obtained, a benchmark with a large number of test instances is employed.

]]>
Jul 2019
<![CDATA[Study on Face Classification and Modeling Based on the Respirator-fit Problem for Chinese Adults]]> Source:Computer Science and Information Technology  Volume  7  Number  3  

Xiaotong Zhou   and Xiaoxia Song   

Introduction: In China, respirators are widely used to protect the public from air pollution. Respirator design is based on anthropometric data obtained from groups of people in RFTPs (respirator fit test panels). However, respirator-user fit is often unsatisfactory because of poor sealing. Methods: To solve the respirator-user fit problem in China, this study was divided into four parts: public head-face measurement; cluster analysis of the head data; reverse construction of a head model based on the clustering results; and forward design of the mask structure using that model. Results: Combining the rotated component matrix with the relevant index, 3 out of 7 representative facial indexes can be used as clustering variables: nose length, bitragion breadth and face height. The optimal number of clusters, determined by Mix-F statistics, was 5. Mathematical-statistical analysis shows that the predominant face type is short and narrow, the fifth cluster in this article. Using the plaster replica method based on a 3D print of the facial model, the chart pattern of an optimized small-face mask was obtained.
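A compact illustration of the clustering step (grouping faces by nose length, bitragion breadth and face height into five clusters) is sketched below with scikit-learn's KMeans; the measurements are random stand-ins, and the paper's actual clustering procedure and Mix-F statistics are not reproduced.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical anthropometric data: columns are nose length, bitragion breadth, face height (mm).
rng = np.random.default_rng(0)
faces = rng.normal(loc=[50, 140, 120], scale=[4, 8, 7], size=(300, 3))

# Cluster into 5 face types, matching the abstract's optimal cluster number.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(faces)
print(kmeans.cluster_centers_.round(1))   # representative face of each cluster
print(np.bincount(kmeans.labels_))        # size of each face-type cluster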

]]>
May 2019
<![CDATA[SAS<sup>®</sup> Metadata Audit Reporting Transition to Modern BI & Analytics]]> Source:Computer Science and Information Technology  Volume  7  Number  3  

Pintu Kumar Ghosh   

In today's world, there has been a huge transition from traditional BI (Business Intelligence) and Analytics to the modern one. In the past, we used to have a countable number of canned reports and analytical models; in the modern BI (self-help reporting, in-memory, tonnes of data) and Big Data Analytics space, volumes have been growing very fast, from a few gigabytes to zettabytes of data. From sourcing the data from disparate sources, housing them in various formats (structured, unstructured, text, streams) and finally landing them on a Big Data/Data Lake platform for BI, reporting, visualization and analytics use, IT change impact analysis and maintenance has been a challenge. This is where auditing and reporting on metadata add value. In every organization, Group IT requires a strategy to track data utilization or system adoption in order to accelerate its decision-making from time to time towards growth and maturity. SAS® 9 has various auditing features and solutions, such as the SAS® Audit, Performance and Measurement Package, SAS® Environment Manager Audit Reporting and the SAS® VA (Visual Analytics) Administrator's Usage Reports. In this paper, an in-house, out-of-the-box user audit and metadata reporting approach is described. It includes features such as tracking the number of SAS users and their usage of various client applications and contents. It also has the capability to capture details such as the dormant SAS users/groups among the total registered SAS users/groups, automatic housekeeping, scheduling and so on. For those seeking a quick-turnaround solution, it can help guide organizations where Enterprise BI & Analytics software such as SAS® Enterprise BI (EBI), SAS® EDI (Enterprise Data Integration) and/or SAS® Visual Analytics has already been deployed.

]]>
May 2019
<![CDATA[Analysis and Recurrent Computation of MBF of the Maximum Types]]> Source:Computer Science and Information Technology  Volume  7  Number  3  

Tkachenco V. G.   and Sinyavsky O. V.   

This manuscript is a continuation of the research of monotone Boolean functions (MBF), using the MBF partition into types. An interesting connection is observed between the intersection of the groups of MBF stabilizers of n-1 rank and the number of isomorphic functions of the nth rank. The number of MBFs of the nth rank obtained from the MBF pairs of n-1 rank is computed. The examples of recursive construction of the MBF of the nth rank are shown. The partitioning of the MBF of maximal types into classes is given. The number of classes of functions of the nth rank is computed. A new classification of monotone Boolean function of maximal types into schemes has been developed. Such schemes are given for 3-7 ranks of the MBF. The dependencies between the maximal types of MBF of the nth rank and the n-1 rank are found, which makes it possible to reduce the MBF enumeration by constructing the equivalence classes of the nth rank from the equivalence classes of n-1 rank. The proposed methods are convenient for analyzing large MBF ranks.

]]>
May 2019
<![CDATA[Feature Selection in Sparse Matrices]]> Source:Computer Science and Information Technology  Volume  7  Number  3  

Rahul Kumar   Vatsal Srivastava   and Manish Pathak   

Feature selection, as a pre-processing step for machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. There are two main approaches to feature selection: wrapper methods, in which the features are selected using the supervised learning algorithm, and filter methods, in which the selection of features is independent of any learning algorithm. However, most of these techniques use feature scoring algorithms that make basic assumptions about the distribution of the data, such as normality, a balanced distribution of classes, or a non-sparse, dense data-set. Data generated in the real world rarely follow such strict criteria. In some cases, such as digital advertising, the generated data matrix is actually very sparse and follows no distinct distribution. For this reason, we have come up with a new approach to feature selection for cases where the data-sets do not follow the above-mentioned assumptions. Our methodology also presents an approach to solving the problem of skewness in the data. The efficiency and effectiveness of our methods are then demonstrated by comparison with other well-known statistical techniques such as ANOVA, mutual information, KL divergence, the Fisher score, Bayes' error and the chi-square test. The data-set used for validation is a real-world user-browsing history data-set used for ad-campaign targeting; it has very high dimensionality and is highly sparse as well. Our approach reduces the number of features to a significant degree without compromising the accuracy of the final predictions.
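The abstract does not give the authors' scoring formula, but the baseline filter methods it compares against can be illustrated in a few lines of scikit-learn on a sparse matrix, for example chi-square scoring with SelectKBest; the data here are synthetic.

import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.feature_selection import SelectKBest, chi2

# Synthetic sparse feature matrix (e.g. user-browsing indicators) and binary labels.
rng = np.random.default_rng(0)
X = sparse_random(1000, 5000, density=0.001, random_state=0, format="csr")
y = rng.integers(0, 2, size=1000)

# Filter-style selection: keep the 50 features with the highest chi-square scores.
selector = SelectKBest(score_func=chi2, k=50).fit(X, y)
X_reduced = selector.transform(X)
print(X.shape, "->", X_reduced.shape)  # (1000, 5000) -> (1000, 50)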

]]>
May 2019
<![CDATA[Research Project Model Canvas]]> Source:Computer Science and Information Technology  Volume  7  Number  3  

Hiago Silva   and Alexandre Cardoso   

This work presents a proposal for a visual tool to assist the creation of academic research projects, dissertations and theses. Its metrics are based on business and management success cases. In the creation and management of team projects, visual strategies are used to present and record the parameters involved in the scope of the project on a canvas, which can be composed of a frame with predefined fields or of connection lines forming a flowchart. Existing tools let researchers view only isolated parts of the project, such as bibliographic references alone or correlation nodes between keywords; it therefore becomes necessary to create a strategy that enables the creator of the project and the team involved to visualize the essence of the project about to be created, to predict needs, failures and objectives, and to restructure the project to suit the research conditions. This strategy takes the form of a framework, called the Research Project Model Canvas, with fields defined according to the needs of creating a research project, and its tables are organized in a logical order of reading, presentation and connection between each one.

]]>
May 2019
<![CDATA[Performance of Datamining Techniques in the Prediction of Chronic Kidney Disease]]> Source:Computer Science and Information Technology  Volume  7  Number  2  

Kehinde A. Otunaiya   and Garba Muhammad   

Data mining, being an experimental science, is very important, especially in the health sector where we have large volumes of data. Since data mining is experimental, obtaining accurate predictions can be demanding, and getting the maximum accuracy out of each classifier is necessary. It is therefore important to select the appropriate feature selection method. Feature selection is highly relevant in predictive analysis and should not be overlooked: it helps reduce execution time and provides more accurate and reliable results. Therefore, more research is needed on predictive analysis and on how reliable such predictions are. Applying data mining techniques in the health sector helps ensure that the right treatment is given to patients. This study was implemented using WEKA and applies three classifiers (multilayer perceptron, naive Bayes and the J48 decision tree) to the prediction of a chronic kidney disease dataset. The aim of this research is to evaluate the performance of the classifiers based on the following metrics: accuracy, specificity, sensitivity, error rate and precision. Based on these performance metrics, the results show that the J48 decision tree gave the best result, while naive Bayes had the lowest execution time, making it the fastest classifier.
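The study itself uses WEKA; as a rough analogue, the same kind of comparison (multilayer perceptron, naive Bayes, and a decision tree as the closest scikit-learn stand-in for J48) can be sketched in Python as below, with a synthetic dataset in place of the chronic kidney disease data.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in for the chronic kidney disease dataset.
X, y = make_classification(n_samples=400, n_features=24, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "multilayer perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "naive Bayes": GaussianNB(),
    "decision tree (J48-like)": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(name,
          "accuracy=%.3f" % accuracy_score(y_test, y_pred),
          "precision=%.3f" % precision_score(y_test, y_pred),
          "recall=%.3f" % recall_score(y_test, y_pred))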

]]>
Mar 2019
<![CDATA[A Simple and Fast Line-Clipping Method as a Scratch Extension for Computer Graphics Education]]> Source:Computer Science and Information Technology  Volume  7  Number  2  

Dimitrios Matthes   and Vasileios Drakopoulos   

Line clipping is a fundamental topic in an introductory computer graphics course. An understanding of a line-clipping algorithm is reinforced by having students write actual code and see the results, using a user-friendly integrated development environment such as Scratch, a visual programming language especially useful for children. In this article a new computation method for 2D line clipping against a rectangular window is introduced as a Scratch extension in order to assist computer graphics education. The proposed method has been compared with the Cohen-Sutherland, Liang-Barsky, Cyrus-Beck, Nicholl-Lee-Nicholl and Kodituwakku-Wijeweera-Chamikara methods with respect to the number of operations performed and the computation time. The performance of the proposed method has been found to be better than all of the above-mentioned methods; it is very fast and simple and can be implemented easily in any programming language or integrated development environment. The simplicity and elegance of the proposed method make it suitable for implementation by a student or pupil in a lab exercise.
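The abstract does not spell out the proposed algorithm, but one of the classical baselines it is compared against, Liang-Barsky clipping of a segment against an axis-aligned window, can be sketched as follows; the coordinates and window are illustrative only.

def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) to the window; return clipped endpoints or None."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to and outside this boundary
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)      # entering intersection
            else:
                t1 = min(t1, t)      # leaving intersection
            if t0 > t1:
                return None          # no visible portion
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)

print(liang_barsky(-5, 2, 15, 8, 0, 0, 10, 10))  # (0.0, 3.5, 10.0, 6.5)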

]]>
Mar 2019
<![CDATA[GDG in UNIX? No Way!]]> Source:Computer Science and Information Technology  Volume  7  Number  2  

Kannan Deivasigamani   

IBM mainframes in the z/OS environment provide a generational structure often referred to as Generation Data Group (GDG) for file storage to maintain data snapshots of related data.[1] These data resulting from business operations within a servicing organization are not uncommon. This structure can hold TEXT data sets without a problem. However, in the case of a UNIX or Linux platform, a comparable structure is unavailable for use by SAS for storing data as TEXT files. This paper contains a solution to this problem and shows a comparison of what the mainframe GDG offers and the solution offered. A developer or a programmer may find that the solution, TextGDS (SAS macro) is even better than the mainframe GDG structure in certain respects. Although there are both limitations and delimitations when using TextGDS, the tool helps to fill the void with UNIX-SAS.

]]>
Mar 2019
<![CDATA[Modification of the Norwegian Traffic Light States as the Method to Reduce the Travel Delay]]> Source:Computer Science and Information Technology  Volume  7  Number  1  

Setiyo Daru Cahyono   Sutomo   Seno Aji   Sudarno   Pradityo Utomo   and Tomi Tristono   

Traffic lights play a vital role as regulatory systems controlling vehicle flow in urban networks. This research is based on a real case: traffic lights installed at a large intersection of an urban network consisting of four sections. The control system implements a modification of the Norwegian traffic-light states. The behavior of the traffic-light states was modeled using the Petri net method, and invariants and simulation were applied for model verification and validation. The purpose of this control system is to reduce travel delays; the intersection performance level is good when the average travel delay on all sections is low. Testing compared the simulation results for the standard settings with those for the system modified with the Norwegian traffic-light states. The control system was able to reduce travel delays slightly. The average Level of Service (LoS) index of the roads for all sections was at level D; the modification improved the performance of the intersection, but not yet significantly. In addition to the traffic-light settings, flyovers are urgently needed to further reduce travel delays.

]]>
Jan 2019
<![CDATA[Business Intelligence Improved by Data Mining Algorithms and Big Data Systems: An Overview of Different Tools Applied in Industrial Research]]> Source:Computer Science and Information Technology  Volume  7  Number  1  

Alessandro Massaro   Valeria Vitti   Angelo Galiano   and Alessandro Morelli   

This paper presents different tools adopted in an industry project oriented towards business intelligence (BI) improvement. The research outputs mainly concern data mining algorithms able to predict sales, logistic algorithms useful for managing product allocation across the whole marketing network of different stores, and web mining algorithms suitable for social trend analyses. Weka, RapidMiner and KNIME were applied for the predictive data mining and web mining algorithms, while Dijkstra's and Floyd-Warshall's algorithms were mainly adopted for the logistic ones. The proposed algorithms are suitable for upgrading the information infrastructure of an industry oriented towards strategic marketing. All the facilities can transfer data into a Cassandra big data system acting as a collector of massive data useful for BI. The goals of the BI outputs are real-time planning of the warehouse assortment and the formulation of strategic marketing actions. Finally, an innovative model for neural-network forecasting of e-commerce sales based on multi-attribute processing is presented. This model can process the outputs of the other data mining algorithms to support logistic actions, and it shows how many data mining algorithms can be embedded into a single prototypal information system connected to a big data store and how such a system can support real business intelligence. The goal of the paper is to show how different data mining tools can be adopted within a single industry information system.

]]>
Jan 2019
<![CDATA[Agribusiness System Hub for Rural Agricultural Cooperative Societies in Developing Countries]]> Source:Computer Science and Information Technology  Volume  6  Number  4  

Kenedy Aliila Greyson   

This paper proposes an alternative method for sharing agribusiness resources with the inhabitants of rural areas in developing countries. Small-scale enterprises in these rural areas fall into the agribusiness category and are characterized by different products produced on a small scale throughout the year; moreover, market places are limited. It is recommended that, when an agribusiness system hub is utilized effectively in cooperative unions of developing countries, it will strengthen the capacity to access markets, create jobs, enhance income, and hence reduce poverty in their communities. The paper considers the whole community as one business company that manages the business collectively using the agribusiness system hub. Data collected from the various selected communities are discussed and a model to address the problem is presented.

]]>
Dec 2018
<![CDATA[Developing a Privacy Compliance Scale for IoT Health Applications]]> Source:Computer Science and Information Technology  Volume  6  Number  4  

Kavenesh Thinakaran   Jaspaljeet Singh Dhillon   Saraswathy Shamini Gunasekaran   and Lim Fung Chen   

The Internet of Things (IoT) is rapidly gaining popularity in the healthcare industry. Though these systems are ubiquitous, pervasive and seamless, the issue of consumers' privacy remains debatable, and there is a marked rise in awareness among patients where data privacy is concerned. IoT-based health applications are particularly prone to privacy risks, so general privacy guidelines for health applications are not adequate. Privacy is often overlooked, causing consumers to stop using an application. In this paper, we propose a compliance scale modelling the privacy principles for privacy-aware novel IoT-based health applications. We conducted an analytical review of privacy guidelines to derive the essential principles required to develop and measure privacy-aware IoT health applications, discarding irrelevant principles, extracting recurring core principles, and merging related ones. A quantitative survey was deployed to empirically evaluate the proposed scale and finalize the principles based on their significance. The proposed compliance scale presents the essential privacy principles to adhere to in the development of novel IoT health applications. It would be significant for policymakers and application developers seeking to measure, better understand and respect consumers' privacy principles in novel IoT-based health applications.

]]>
Dec 2018
<![CDATA[Maximum MBF Types]]> Source:Computer Science and Information Technology  Volume  6  Number  4  

Tkachenco V. G.   and Sinyavsky O. V.   

The classification of monotone Boolean functions (MBF) into types is given. The notion of the maximum type of MBF is introduced, and the shift-sum types are constructed. The matrices of type distribution by rank are constructed. Convenient algorithms for finding the number of maximum types and the maximum types themselves are presented. The proposed methods can be used to analyze large MBF ranks.

]]>
Dec 2018
<![CDATA[Modeling and Simulation of Low Frequency Subsurface Radar Imaging in Permafrost]]> Source:Computer Science and Information Technology  Volume  6  Number  3  

K. van den Doel   and G. Stove   

We describe simulated low frequency subsurface radar scans targeting the detection of a liquid water layer, or some other reflector such as conductive sulfides, under permafrost. A finite-difference time-domain (FDTD) and ray tracing simulation framework is used to model measurements and data analysis at depths from 350m to 800m. Operating characteristics such as pulse shape and noise levels of the measurement apparatus were obtained from an existing commercial radar scanning system. Results were used to test and optimize data analysis methods, predict maximum detection depth under realistic time constraints, and guide experimental design parameters such as the amount of replications required for denoising and length of the wide angle reflection and refraction (WARR) scan lines used for velocity estimation.

]]>
Sep 2018
<![CDATA[Co-opetition Strategy in Volunteer Computing: The Example of Collaboration Online in BOINC.RU Community]]> Source:Computer Science and Information Technology  Volume  6  Number  3  

Victor Tishchenko   

Volunteer computing (VC) is a powerful way to harness distributed computing resources to perform large-scale scientific tasks. Its success directly depends on the number of participants, their PCs (or other devices), and the time they contribute. While the computational aspect has received much effective research attention and many solutions, the social side of the VC phenomenon remains largely unexplored: how to persuade the owners of widely dispersed smart machines and devices to participate in VC projects, and how to find ways to speed up and expand the capacity of donated computing resources. We think that a motive of self-actualization among volunteer computing participants is not enough to explain why millions of unskilled people participate in scientific computation day after day. Our research indicates that the answer lies at the intersection of self-oriented motivation and the interactional, organizational possibilities emerging through the Internet. In this paper we investigate the Russian VC community and discuss the suggestion that online collaboration can capture people's motivation better than intrinsic motives alone.

]]>
Sep 2018
<![CDATA[Drunk Driving Detection]]> Source:Computer Science and Information Technology  Volume  6  Number  2  

Kavish Atul Sanghvi   

Drunk driving accidents have increased day by day and have become a major issue; on average, nearly 29% of road accidents are caused by drunk driving. To avoid such accidents, precautionary measures using different technologies are taken. Drunk driving detection is the process of determining whether a person is drunk. It uses data provided by sensors and a camera, which are then processed by specific algorithms and methods to decide whether the person is drunk. Detection methods include iris recognition using Gabor filters, neural networks applied to face images, speech-based detection, non-invasive biological sensors, driving-pattern analysis and engine-locking systems.

]]>
Jul 2018
<![CDATA[Universal Electronic Student Course Registration Model (U-ESCRM)]]> Source:Computer Science and Information Technology  Volume  6  Number  2  

Ejiofor C. I.   and Okon, Emmanuel Uko   

Student course registration is an integral facet of university registration processes, which holistically cater for organizational resources: manpower and material. Although several approaches have been proposed for addressing student registration, this research paper provides a comprehensive approach aimed at making course registration comprehensive by integrating departmental units within the university into a single architectural framework. This architecture has the capacity to support organizational procedures and processes while lessening the overhead costs associated with process depletions.

]]>
Jul 2018
<![CDATA[Cyberconflicts as a Threat for the Modern State]]> Source:Computer Science and Information Technology  Volume  6  Number  1  

Marek Górka   

The Internet is quite naturally becoming a 'new battlefield', offering a new dimension of conflict (the fifth, alongside land, sea, air, and the stratosphere). Cyberwar is another way of waging conflict in the long history of military technology, and it forces new tactical and operational concepts. Global awareness of cyberwar has risen considerably in the last few years, and many nation states are preparing for defensive and offensive operations. In fact, cyberwar is part of the evolution of conventional war, which in turn is related to changes in the social, political and, above all, technological spheres. What is stressed is the need to examine the ethical implications, which leads to further questions and doubts: may the use of cyberwar techniques result in shorter, less bloody and consequently more 'ethical' conflicts? A cyber-attack does not need to kill anyone or cause material losses, yet it is still considered dangerous.

]]>
Jan 2018
<![CDATA[Motivating Energy and Resource Conservation Behavior by Gamification]]> Source:Computer Science and Information Technology  Volume  6  Number  1  

Huang Lin   and Daqing Hou   

Gamification is the use of game thinking and game mechanics in a non-game context to engage users and solve problems. The project, Motivating Energy and Resource Conservation Behavior by Gamification, aims to complement research on the effect of gamification in motivating energy savings. In this project, we are designing a web game called Castle War that uses the electricity and water usage data from the Smart Housing Project of Clarkson University. The game is designed as a war game that can engage every resident of the Smart Housing Project: residents who save more energy on a given day receive more in-game currency to spend that day. Players can use the game currency to build their own empires while forming unions to battle with others. In addition, the game elements are closely related to the environmentally friendly theme; the real-world concepts of the carrying capacity of the environment and the trade-off between development and pollution are reflected in the game. This project will be a valuable addition to research on how engagement with a gamified, computer-based virtual world can motivate real-world energy consumption behavior.

]]>
Jan 2018
<![CDATA[Analysis of Development of Dynamic S-Box Generation]]> Source:Computer Science and Information Technology  Volume  5  Number  5  

Amandeep Singh   Praveen Agarwal   and Mehar Chand   

The Advanced Encryption Standard (AES) is a symmetric block cipher widely used by different organizations to encrypt data and secure it from being compromised. The only nonlinear part of AES is the S-Box (Substitution Box), which provides confusion in the algorithm. The main limitation of the S-Box in AES, however, is that it is static throughout the algorithm, which makes it the main focus for cryptanalysts analyzing weaknesses that enable certain attacks. From 2000 onwards, a number of algebraic attacks on AES have been carried out, challenging its security. At the same time, considerable research has been conducted to make AES more secure by using dynamic S-Boxes that present more confusion to the cryptanalyst. In the present paper we review dynamic S-Box techniques and analyze them on the basis of the S-Box properties essential for secure S-Box construction, such as non-linearity, the XOR profile, the Strict Avalanche Criterion (SAC) and the Bit Independence Criterion (BIC). These techniques are also compared with the original AES results.
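Two of the named criteria can be checked mechanically for any 8-bit substitution table. The sketch below computes the maximum entry of the XOR (difference distribution) profile and an approximate SAC measure for a candidate S-box; the identity permutation used here is only a placeholder for whatever static or dynamic S-box is being evaluated.

from collections import Counter

def xor_profile_max(sbox):
    """Largest count in the difference distribution table (excluding the zero difference)."""
    worst = 0
    for dx in range(1, 256):
        counts = Counter(sbox[x] ^ sbox[x ^ dx] for x in range(256))
        worst = max(worst, max(counts.values()))
    return worst

def sac_matrix(sbox):
    """Probability that output bit j flips when input bit i flips (ideal value: 0.5)."""
    return [[sum((sbox[x] ^ sbox[x ^ (1 << i)]) >> j & 1 for x in range(256)) / 256.0
             for j in range(8)] for i in range(8)]

# Placeholder S-box (identity permutation); a real evaluation would plug in AES's
# static S-box or a dynamically generated candidate.
sbox = list(range(256))
print(xor_profile_max(sbox))   # 256 here, showing the identity map gives a terrible profile
print(sac_matrix(sbox)[0])     # SAC row for input bit 0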

]]>
Nov 2017
<![CDATA[A Comparison between Characteristics of NoSQL Databases and Traditional Databases]]> Source:Computer Science and Information Technology  Volume  5  Number  5  

Mitko Radoev   

With the increasing popularity of NoSQL databases, the question arises whether they have all the characteristics of databases and whether they can be a real alternative to relational databases in all application domains. This paper attempts to systematize the most important characteristics of traditional databases and then to analyze whether NoSQL databases have these characteristics. On this basis it is possible to conclude whether NoSQL databases have the necessary qualities to be called databases, or whether they are rather data stores with limited capabilities. The results of the comparison show that none of the NoSQL DBMSs under consideration covers more than 50% of the characteristics of traditional databases, so the use of the term "database" for any one of them is not fully correct.

]]>
Nov 2017
<![CDATA[Algebraic Objects of MBFs and Recursive Computation of the Dedekind Number]]> Source:Computer Science and Information Technology  Volume  5  Number  4  

Tkachenco V.G.   and Sinyavsky O.V.   

In this article the whole set of rank n-1 Monotone Boolean Functions (MBFs) is divided into equivalence classes, and it is shown how the Dedekind number D(n) can be calculated using this partition. Five formulas for calculating this number were found, as well as the algebraic properties of MBF blocks.
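For very small ranks the Dedekind number can also be obtained by brute force, which is a useful sanity check against any formula-based approach like the one described in the article; the sketch below enumerates all Boolean functions of n variables and counts the monotone ones (feasible only for n <= 4).

from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by exhaustive enumeration."""
    points = list(product((0, 1), repeat=n))            # all 2^n input vectors
    # Pairs (i, j) where input vector i is coordinatewise <= input vector j.
    order = [(i, j) for i, a in enumerate(points) for j, b in enumerate(points)
             if all(x <= y for x, y in zip(a, b))]
    count = 0
    for truth_table in product((0, 1), repeat=len(points)):   # all 2^(2^n) functions
        if all(truth_table[i] <= truth_table[j] for i, j in order):
            count += 1
    return count

print([dedekind(n) for n in range(5)])  # [2, 3, 6, 20, 168]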

]]>
Sep 2017
<![CDATA[GPS Tracking System Using Car Charger]]> Source:Computer Science and Information Technology  Volume  5  Number  4  

Kavish Atul Sanghvi   and Prianka Manoj Mestry   

GPS (Global Positioning System) is a space-based satellite navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. A GPS tracking system is a software system that allows you to track the route from any source to a destination. A GPS tracking system using a car charger is a modified version of this concept: it obtains the current location of a moving car driven by an ordinary person and is user friendly. The system uses the car charger as its power source and displays latitude and longitude along with a map.

]]>
Sep 2017
<![CDATA[Electronic Flight Bag in the Operation of Airline Companies: Application in Turkey]]> Source:Computer Science and Information Technology  Volume  5  Number  4  

Savaş S Ateş   

The Electronic Flight Bag (EFB) is a sophisticated technology intended to enable paperless flight operations in the airplane. The use of EFB is mainly planned to increase the efficiency of aviation companies. Within the scope of this research, the procedures of EFB usage in flight operations and the advantages to airline companies are investigated. In the first part of the paper, the concept of efficiency is defined and the requirements of flight operations planning procedures are investigated through a literature review. In the second part, the electronic flight bag is described in terms of its properties, types, services, and usage conditions. In the last part, the efficiency of EFB usage is estimated using data collected through semi-structured interviews. The research was completed with six participants from different airline companies in Turkey. The paper finds that time saving in the data-updating process is one of several advantages of EFB. Moreover, with the help of an accelerated information process, companies may save more time through faster aircraft turnarounds.

]]>
Sep 2017
<![CDATA[Augmented Reality Musical App to Support Children's Musical Education]]> Source:Computer Science and Information Technology  Volume  5  Number  4  

Bruno Lemos   Ana G. D. Corrêa   Marilena Nascimento   and Roseli D. Lopes   

This article presents an Augmented Reality App to support musical learning of children. The App works by verifying whether sequences of musical notes of the melody are correctly colored in a printed pentagram (target). Meanwhile, an animation in Augmented Reality with a 3D character dancing to the rhythm of the melody is initiated. The App has been previously tested with six children and the results point towards the great potential of this tool to improve the process of children's musical education.

]]>
Sep 2017
<![CDATA[Bayesian Approach to Perceptual Edge Preservation in Computer Vision]]> Source:Computer Science and Information Technology  Volume  5  Number  3  

Ren-Jie Huang   Jung-Hua Wang   and Chun-Shun Tseng   

This paper presents a novel approach for preserving perceptual edges, i.e. the boundaries of objects as perceived by human eyes. First, a subset of pixels (pixels of interest, POI) in an input image is selected by a pre-process that removes background and noise. Each POI is then treated, one by one, as a target pixel and subjected to a Bayesian decision. The approach is characterized by iteratively employing a shape-variable mask to sample the gradient orientations of pixels for measuring the directivity of a target pixel; the mask shape is updated after each iteration. We show that a converged mask covers the pixels that best fit the orientation similarity of the target pixel, which in effect fulfills the similarity and proximity principles of Gestalt theory. Subsequently, a Bayesian rule is applied to the converged directivity to determine whether the target pixel belongs to a perceptual edge. Instead of using state-of-the-art edge detectors such as the Canny detector [1], a pre-process combining a Gaussian Mixture Model (GMM) [2] and Difference of Gaussian (DoG) [3] is devised to select POI, wherein the GMM is responsible for removing the background of an input image (first screening), whereas the DoG filters noisy or false contours (second screening). Experimental results indicate that a great amount of computational load can be saved in comparison with the use of the Canny detector in our previous work [4]. Since perceptual edges are useful for forming a complete object contour corresponding to human visual perception, the results of this paper can potentially be combined with more advanced object detection methods such as the deep learning-based SSD [5] to achieve the same effect as the human visual system when dealing with obscured or corrupted input images. Even if a target object is occluded by other objects or corrupted by rain water, it can still be identified correctly; this feature should greatly enhance the operational safety of unmanned vehicles, unmanned aircraft and other autonomous systems.
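The Difference of Gaussians used for the second screening can be illustrated in a few lines; the sketch below applies two Gaussian blurs and subtracts them, which is the standard DoG band-pass operation, not the paper's full GMM-plus-DoG pipeline, and the threshold is an assumed value.

import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_small=1.0, sigma_large=2.0):
    """Band-pass the image by subtracting a wide Gaussian blur from a narrow one."""
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

# Synthetic test image: a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0
dog = difference_of_gaussians(image)

# Pixels with a large |DoG| response lie near the square's edges and could be kept as POI.
poi_mask = np.abs(dog) > 0.1
print(poi_mask.sum(), "pixels of interest out of", image.size)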

]]>
May 2017
<![CDATA[HPC Services to Characterize Genetic Mutations through Cloud Computing Federations]]> Source:Computer Science and Information Technology  Volume  5  Number  3  

Manuel-Alfonso López-Rourich   Felipe Lemus-Prieto   Javier Corral-García   and José-Luis González-Sánchez   

One child in every 200 births may be affected by one of the approximately 6,000 monogenic diseases discovered so far. Establishing the pathogenicity of the mutations detected with NGS (Next-Generation Sequencing) techniques, within the sequence of the associated genes, will allow the Precision Medicine concept to be developed. However, the clinical significance of the mutations detected in a genome may sometimes be uncertain (VUS, Variant of Uncertain Significance), which prevents the development of health measures devoted to personalizing individuals' treatments. The pathogenicity of a VUS can be inferred from evidence obtained in specific types of NGS studies. Therefore, the union of supercomputing through HPC (High-Performance Computing) tools and the cloud computing paradigm (HPCC), within a Data Center federation environment, offers a solution for developing and providing services to infer the pathogenicity of a set of VUS detected in a genome, while guaranteeing both the security of the information generated during the whole workflow and its availability.

]]>
May 2017
<![CDATA[Efficient Signature Pattern Generation by Using Latticed Bounding Box]]> Source:Computer Science and Information Technology  Volume  5  Number  3  

Ji-Hyeon Park   Hee-Jin Jung   and Jin-Woo Jung   

With the growing need for security and human-machine interaction, many automated user verification methods based on biometrics have been developed. One of the representative methods among them is on-line signature recognition. However, previous on-line signature recognition methods require overly complex processing, making them ill-suited to applications that need simple and fast processing. In this paper, we present an efficient signature pattern generation method that uses a relatively adjustable latticed bounding box to make signature recognition more efficient for various human-machine interactions.

]]>
May 2017
<![CDATA[High Performance Spin-Orbit-Torque (SOT) Based Non-volatile Standard Cell for Hybrid CMOS/Magnetic ICs]]> Source:Computer Science and Information Technology  Volume  5  Number  3  

Kotb Jabeur   Gregory Di Pendina   and Guillaume Prenat   

Spin-orbit-torque magnetic tunnel junction (SOT-MTJ) is an emergent spintronics device with a promising potential. It resolves many issues encountered in the current MTJs state of the art. Although the existing Spin Transfer Torque (STT) technology is advantageous in terms of scalability and writing current, it suffers from the lack of reliability because of the common write and read path which enhances the stress on the MTJ barrier. Thanks to the three terminal architecture of the SOT-MTJ, the reliability is increased by separating the read and the write paths. Moreover, SOT-induced magnetization switching is symmetrical and very fast. Thus, doors are opened for non-volatile and ultra-fast Integrated Circuits (ICs). In this paper, we present the architecture of a mixed CMOS/Magnetic non-volatile flip-flop (NVFF). We use a compact model of the SOT device developed in Verilog-A language to electrically simulate its behaviour and evaluate its performances. The designed standard cell offers the possibility to use the usual CMOS flip-flop functionality. In addition, it enables storing and restoring the magnetic data by exploiting the non-volatility asset of MTJs when the circuit is powered off. With a 28nm dimension, the SOT-MTJ based NVFF demonstrated a very high speed switching (hundreds of picoseconds) with 7× decrease in term of writing energy when compared to the STT device.

]]>
May 2017
<![CDATA[Shape Grammars for Creative Decisions in the Architectural Project]]> Source:Computer Science and Information Technology  Volume  5  Number  2  

Joana Tching   Joaquim Reis   and Alexandra Paio   

Shape grammars (SG), which define a set of shapes, are used in applications in the field of Computational Creativity (CC). Computational Creativity can be considered an area of Artificial Intelligence (AI) that chases the goal of understanding creativity and building computational applications that emulate or support human creativity in Arts and Science. In this context, our aim is to show how SG may provide artists with applications to assist them in the creative process, not only creating solutions but also as a way of creating new ideas. Our objective is to demonstrate how, in architecture, SG can work with rules that will convey legal restrictions, space needs and goals of the architect, creating possible solutions to a project. A wide range of solutions can be tested in computational applications based in SG. These applications can also encourage the architect to go further in his creativity through shape emergence where the conditions are fulfilled and presented as innovative and/or unexpected. Architects obey strict rules when they apply artistic intention to a specific need/objective intention (space building). Thus, our methods are to enumerate SG as a tool for decision-making in architectural projects and to show a set of common phases that may be generated by the use of computational applications in response not only to technical needs but also to creative goals.

]]>
Mar 2017
<![CDATA[Distributed Backpressure Routing and Byzantine Generals Fault Detection for Electric Vehicles]]> Source:Computer Science and Information Technology  Volume  5  Number  2  

Evangelos D. Spyrou   and Dimitrios K. Mitrakos   

Electric vehicles (EVs) have emerged in the transport domain due to their energy efficiency and the clean energy they utilise. The electric vehicle routing problem is essentially one of selecting a set of minimum-cost routes while meeting customer demand. In this work, we model the electric vehicle routing problem using a wireless network methodology, namely the backpressure framework. Every route carries a penalty that includes the driving time of each road. We derive a weight as a function of the road queue backpressure and the driving time of a car; the next route for the EV is the one with the highest weight. We show that this methodology leads to faster routes when roads are often affected by accidents or traffic jams. We also propose a fault detection mechanism to ensure that there is no deficiency in the electric vehicle routing process in terms of wireless communication, employing the Byzantine Generals algorithm to detect possibly faulty wireless mediums in cars or traffic lights. We show how our approach detects faulty wireless mediums and provide an alternative for when consensus cannot be reached.
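A heavily simplified sketch of the route-selection rule described above (choose the road whose weight, combining queue backpressure and driving time, is highest) might look as follows; the weight formula and the numbers are assumptions for illustration, since the abstract does not give the exact expression.

def route_weight(queue_here, queue_next, driving_time, alpha=1.0):
    # Assumed weight: queue differential (backpressure) minus a driving-time penalty.
    return (queue_here - queue_next) - alpha * driving_time

def choose_next_road(current_queue, candidate_roads):
    """candidate_roads: {road_name: (queue_at_road, driving_time_minutes)}"""
    return max(candidate_roads,
               key=lambda r: route_weight(current_queue, *candidate_roads[r]))

# Hypothetical junction: three outgoing roads with (queue length, driving time).
roads = {"A": (4, 6.0), "B": (1, 9.0), "C": (2, 5.0)}
print(choose_next_road(current_queue=8, candidate_roads=roads))  # 'C' for these numbers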

]]>
Mar 2017
<![CDATA[VUZALIZER: A Max/MSP Object for Real-time Generation of RCMC Canons]]> Source:Computer Science and Information Technology  Volume  5  Number  2  

Alba Francesca Battista   Nicola Monopoli   and Matteo Nicoletti   

Regular Complementary Canons of Maximal Category (RCMC) or Vuza Canons were introduced to the musical world with Dan Tudor Vuza's seminal papers at the pnm Conference in the 1990s. Musicians have always been intrigued by canon construction, q.v. the complex polyphony of the Flemish composer Josquin Desprez or the contrapuntal techniques of Johann Sebastian Bach, whose properties have been translated into formal algebraic terms. Nowadays the process of building mosaic canons can be implemented using programming languages allowing composers to use these extremely complex macro-structures. In this paper we will discuss our algorithm that allows composers to create and directly manage RCMC Canons. In addition, we will describe VUZALIZER, a Max/MSP object which generates Vuza Canons.

]]>
Mar 2017
<![CDATA[3DLBP and SVD Fusion for 3D Face Recognition Using Range Image]]> Source:Computer Science and Information Technology  Volume  5  Number  2  

El Mahdi Barrah   Rachid Ahdid   Said Safi   and Abdessamad Malaoui   

In this paper, we present a novel approach that fuses 3D Local Binary Patterns (3DLBP) and Singular Value Decomposition (SVD) for face recognition when the Kinect is used as the 3D face scanner. The 3DLBP method fused with SVD is then compared with other methods proposed in the literature for face recognition using the Kinect. Experimental results on the FRGC 2.0 face dataset show that the data generated by the Kinect are discriminating enough to allow face recognition and that 3DLBP performs better than the other methods.

]]>
Mar 2017
<![CDATA[Using MQIS to Improve Medical Quality - In the Case of Local Hospital]]> Source:Computer Science and Information Technology  Volume  5  Number  2  

Chien-Wen Hung   

In order to improve medical quality, a Medical Quality Indicator System has been implemented on the hospital database system. The Medical Quality Indicators comprise multiple sets of performance indicators for measuring different care settings within the information system; each care setting uses numerous indicators for measuring quality. This study mines the Medical Quality Indicators to assist managers in developing better medical quality and other relevant policies for a hospital. Association rules over the relational database design are implemented in the Medical Quality Indicator system, which supports indicator management and health policies. It provides medical managers with a useful tool to rapidly search for valuable information based on the patient medical record library, enabling health managers to rapidly establish health strategies that enhance the quality of medical care. In this study, medical staff can rely on three types of care indicators to effectively control patients' weight; the results of care are shown in Figure 6 of the paper. The three types are Acute Care (AC) Indicators, Psychiatric Care (PC) Indicators and Long-term Care (LTC) Indicators, with care carried out according to Rule 1, Rule 2 and Rule 3 respectively. Rule 1 gives the best results for improving patients' weight: 2052 patients weigh more than 60 kg (2485 * 0.826 = 2052). Rule 1 is (AC Indicator 6 and PC Indicator 7 and LTC Indicator 4) ∩ (AC Indicator 15 and PC Indicator 3 and LTC Indicator 2) = (82.6%). In this way, health managers no longer only acquire and package knowledge, but create and apply it as well.

]]>
Mar 2017
<![CDATA[NewsSE: An Ontology-based Search Engine for News]]> Source:Computer Science and Information Technology  Volume  5  Number  2  

H. Beheshti   F. Poorahangaryan   and S. A. Edalatpanah   

The rapid growth of information on the web and the need for information sharing, together with the important role news plays in our lives and the fact that the internet has become the biggest repository of news, lead us to research in this domain. In this study, we introduce a new framework for searching news by considering the relation between news and events. This framework, called NewsSE, treats news as a series of events in order to cover all aspects of a news item, and uses a Domain Ontology and an Event Ontology to extract the concepts and relations present in news. Using semantic techniques in the search engine, together with the relationships between the events related to each news item, improves result ranking, yields fresher results and covers more concepts related to a particular query compared to other search engines such as Google News and Yahoo News. The results show that the Mean Average Precision (MAP) of this framework is 86.81 percent, which is higher than that of Google News and Yahoo! News. Furthermore, the event ontology increases Average Precision by about 0.8 percent compared to traditional search engines.

]]>
Mar 2017
<![CDATA[Basic MBF Blocks Properties and Rank 6 Blocks]]> Source:Computer Science and Information Technology  Volume  5  Number  1  

Tkachenco V.G.   and Sinyavsky O.V.   

All Monotone Boolean Function (MBF) blocks of the sixth rank are considered, and a series of new properties of MBF blocks is proved. The proposed methods can be used to analyze large MBF ranks. Tables describing all of the blocks from the 4th to the 6th rank are provided, and on the basis of these tables the typical dependencies of these blocks are counted.

]]>
Jan 2017
<![CDATA[Data Warehousing: A Practical Managerial Approach]]> Source:Computer Science and Information Technology  Volume  5  Number  1  

Max North   Larry Thomas   Ronny Richardson   and Patrick Akpess   

The primary goal of this article is to provide managers and executives the tools and information needed to make informed decisions concerning data warehousing, to understand the processes and technology involved, and to identify individuals' responsibilities. The information is presented in clear, understandable terms and is designed for decision makers with little or no Information Technology (IT) background.

]]>
Jan 2017
<![CDATA[Future of Humanity: Energy and Knowledge Engineering]]> Source:Computer Science and Information Technology  Volume  5  Number  1  

A. Ziya Aktaş   

Energy has become the keyword relevant to the economic and social development and sustainability of all countries as well as the ecological future of the world. Knowledge-based strategic planning and decisions in energy are therefore very crucial for all. Knowledge may be defined as the information needed for good planning and intelligent decisions for any human being, from a lay man to a top man in a public or private organization up to even the country president. In the article, after some introductory remarks about energy and knowledge, knowledge - energy relation is defined. Smart cities and intelligent utilities / smart energy, Energy Engineering and Knowledge Engineering terms are also elaborated. As a conclusion, the synergy and even a symbiosis between energy and knowledge is noted and it is stated that energy engineering and knowledge engineering together with nano science and technology will be very crucial for the future of humanity.

]]>
Jan 2017
<![CDATA[Technological and Information Governance Approaches to Data Loss and Leakage Mitigation]]> Source:Computer Science and Information Technology  Volume  5  Number  1  

Amie Taal   Jenny Le   Alex Ponce de Leon   James A. Sherer   and Karin S. Jenson   

While foreign national cyber-attacks tend to garner headlines, organizations should also consider "Data Leakage" incidents caused or perpetrated by insiders, whether intentional or otherwise. But addressing Data Leakage is especially tricky because of two integral aspects that require a nuanced approach to finding a solution: (1) Data Leakage is a problem that often affects organizations within their firewalls. Data Leakage therefore presents a conundrum where employees are both the potential creators as well as the potential solution(s) to an insider threat. Solutions to this conundrum present a challenge where strictly adhering only to an existing policy diminishes an organization's otherwise beneficial ability to react to rapidly changing environments. But organizations are not naturally policy-driven, as the vast majority of employees—and data transfers—are not puppets of an omniscient author. So, while a perfect policy with perfect application (by perfectly informed employees) would be the best solution, that panacea simply doesn't exist. (2) While Data Leakage can be malicious in nature, malicious intent need not exist. Most employees and data transfers are not solely policy driven (and therefore cannot be treated as such in service of their jobs). Instead, many—if not most—potential Data Leaks will be perpetrated by people accidentally or guided by malicious direction or incompetence. Considering the duality of roles employees play in Data Leakage and that the hazardous outcomes are often accidental, we conclude that strict policy adherence is neither feasible nor available. Instead, a partially directed, partially improvisational approach is an appropriate means by which an organization can consider and address Data Leakage issues associated with Insider Threats.

]]>
Jan 2017
<![CDATA[3D Visualization Applied to PRBGs and Cryptography- Long Version]]> Source:Computer Science and Information Technology  Volume  4  Number  5  

Michel Dubois   and Eric Filiol   

Today there is no easy and quick way to analyse and differentiate random data. Yet all day long our computers generate pseudo-random data, and our cryptographic algorithms tend to act as pseudo-random generators of data in order to better hide the message. We can therefore ask whether it is possible to quickly determine the algorithm used to construct a random sequence of numbers and, in a second step, to distinguish between a PRBG and a cryptographic algorithm. In this paper, we present a new approach to visualizing, in two- and three-dimensional environments at the same time, a sequence issued from a pseudo-random bit generator or from cryptographic algorithms. To embody our idea, we assume that sequences produced by PRBGs and cryptographic algorithms are comparable to a nonlinear system generating a chronological series of data. We have developed tools to carry out our analysis and applied them to well-known kinds of PRBG and to the AES. Even if our approach cannot serve as definitive proof of the quality of a random source, it can be of great help in quickly (because visually) distinguishing two random sequences and eventually finding some statistical bias.

]]>
Oct 2016
<![CDATA[Response Time Analysis of Mobile Application DNUN in New Relic Monitoring Platform]]> Source:Computer Science and Information Technology  Volume  4  Number  5  

Karthik Reddy Nalla   and Hosam El-Ocla   

In this paper we present a case study using New Relic as a monitoring platform to analyse the performance of mobile applications. DNUN (Danger Notification and User Navigation) is the application evaluated. DNUN is associated with a geolocation system for navigating to the location of an object, for immediate or later use, from anywhere on the globe. We use response time as the metric that Application Performance Management uses to measure the reliability of the DNUN application. We register the DNUN app with New Relic using gem and configuration files. The dashboard shows that the overall rating of the app is excellent based on its response time performance.

]]>
Oct 2016
<![CDATA[Efficacy of Philosophical Ethics Uptake in E-learning]]> Source:Computer Science and Information Technology  Volume  4  Number  4  

Brendan James Moore   and Syed Adeel Ahmed   

E-Learning and Distance Learning have grown as a field over the last twenty years. Businesses and universities alike have an interest in being assured that E-Learning can be just as effective as traditional classroom settings. In this paper, seven years of survey data are presented and analyzed, showing that an analytic philosophical ethics course had noticeable effects on the students who took it. Students' beliefs about the concepts of rightness and wrongness noticeably changed after completing an online analytic philosophy ethics course, indicating that students are less likely to be Simple Subjectivists after completing such a course.

]]>
Aug 2016
<![CDATA[Multiple Criteria Decision Making for the Structural Organization of Software Architecture]]> Source:Computer Science and Information Technology  Volume  4  Number  4  

Sergey Orlov   and Andrei Vishnyakov   

Architectural decisions have a significant impact on the development process as well as on the quality of applied systems. It is therefore desirable to rely on mature solutions and proven experience when making such decisions. This problem can be partially solved with the use of architectural patterns, but a solution for the same task can be implemented using different sets of patterns. As a result, there is the problem of choosing and evaluating a software architecture that is built using a number of patterns and that meets the system requirements. In this paper, a technique is proposed that allows selecting the optimal software architecture for applied software. The selection is reduced to a criteria importance theory problem: we need to pick a set of metrics that assess the characteristics of the software architecture, and then determine the metrics' scales and information about their importance. The results allow us to draw conclusions about the usefulness of the proposed technique during the architecture design phase for applied software.

]]>
Aug 2016
<![CDATA[Blocks of Monotone Boolean Functions of Rank 5]]> Source:Computer Science and Information Technology  Volume  4  Number  4  

Tkachenco V.G.   and Sinyavsky O.V.   

Based on the classification of monotone Boolean functions (MBFs) into types and the method of building MBF blocks, an analysis of rank-5 MBFs is conducted. Four matrices for such MBFs are presented. It is shown that there are 7581 MBFs of rank 5, 276 of which are MBFs of maximal types. These 7581 MBFs are contained in 522 blocks, or 23 groups of isomorphic blocks, or 6 groups of similar blocks. The proposed methods can be used to analyze large MBF ranks. Previous articles showed how MBFs are used in telecommunications for analyzing networks and building codes for cryptosystems.

]]>
Aug 2016
<![CDATA[Modernization of Processes Control Methods for Digital Image Processing]]> Source:Computer Science and Information Technology  Volume  4  Number  4  

Tashmanov E.B.   

The article studies pursuit games over digital image brightness levels, described by discrete second-order linear equations. We obtain sufficient conditions for the pursuit to terminate. Using a model example, we show that, by applying control within a specified area, a certain brightness level can be reached in a digital image even in the presence of a player who tries to prevent this transition.

]]>
Aug 2016
<![CDATA[Experiences from the Series of International Robotics Workshops]]> Source:Computer Science and Information Technology  Volume  4  Number  3  

Richard Balogh   Grzegorz Granosik   Valery Kasyanik   and David Obdrzalek   

In this paper we summarize our experiences with the series of educational robotics workshops organized for a group of students from four schools in four countries. Brief description of the activities, their results and evaluation are presented.

]]>
Jun 2016
<![CDATA[Computer Simulation of Ants Escaping from a Single-exit Room]]> Source:Computer Science and Information Technology  Volume  4  Number  3  

Shujie Wang   Weiguo Song   and Xiaodong Liu   

Ants are social insects and are generally experimentally tractable; for these reasons, ants are favored by researchers studying crowd behavior. In the data gathered from our previous ant evacuation experiments, "selfish evacuation behavior" was not observed under stress conditions. To extend our understanding of the topic, we constructed a cellular automaton model to simulate the behavior of ants in stress situations. Ant body size, shape, and actual speed, taken from the previous evacuation experiments, were accounted for in the model. In our previous ant experiments, a filter paper with a repellent substance was located on the side of the experimental chamber opposite the exit. Inspired by this setup, a parameter D was introduced into our model, representing the drift to move forward (toward the exit and away from the repellent substance); every ant (An) in the room had the same D value. In the model, N ants are initially distributed randomly in the room and a sequential update rule is adopted. Considering the temporal evolution of the number of ants escaping for each exit size, we present the experimental result for each exit width and use our model to simulate ants escaping through that exit width. Some factors affecting evacuation efficiency are also studied.

]]>
Jun 2016
<![CDATA[A Cognitive Walkthrough towards an Interface Model for Shape Grammar Implementations]]> Source:Computer Science and Information Technology  Volume  4  Number  3  

Joana Tching   Joaquim Reis   and Alexandra Paio   

The present study arises from the interest in computing as an important partner in the design process and in the new paradigms of design practice that emerge with the use of computation. Shape Grammars (SG) are an example of rule-based systems that, used in applications in the field of computational creativity, might assist architects, designers, and artists in the creative process, not only creating solutions but also serving as a way of developing new ideas. However, the SG applications developed so far address neither the specific work of creative projects nor the computational knowledge and habits of designers in general. With this in mind, this research presents our proposal of IM-sgi (the initials IM stand for Interface Model and sgi for shape grammar implementations), a model of interface for SG implementations that can help SG to be introduced into project practice, as this is not yet a reality and could be a great contribution to new creative and complex architectural and design projects. This paper describes the analysis used to define the IM-sgi model, with the result of a Cognitive Walkthrough (CW) applied to a group of SG implementations and with the interaction model of Scott Chase [1] as the basis for defining the users and how they communicate with the SG implementation.

]]>
Jun 2016
<![CDATA[Detecting Malicious Behaviors of Software through Analysis of API Sequence k-grams]]> Source:Computer Science and Information Technology  Volume  4  Number  3  

Hyun-il Lim   

Nowadays, software is widely applied to increase accuracy, efficiency, and convenience in various areas of our life, and it is essential in recent computing environments. Despite the valuable applications of software, malicious behaviors caused by software vulnerabilities threaten our secure computing environments, so it is important to identify and detect malicious behaviors of software to maintain secure computing environments. In this paper, we propose an approach to detecting malicious behaviors of software by analyzing information about API function calls. API functions are essentially used to access various services provided by operating systems or devices when developing software. In addition, API functions can describe the behaviors of software because they perform predefined specific operations during program execution. We classify API functions in Microsoft Windows operating systems and propose an approach to representing malicious behaviors of software with API functions, and to detecting such behaviors by analyzing dynamic API function calls. To increase the efficiency and tolerance of the analysis, malicious behaviors are abstracted as sets of k-grams, and they can be identified by calculating the similarity between these sets of k-grams and a sequence of API function calls.
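The k-gram abstraction and similarity check can be sketched as follows. This is a minimal illustration assuming Jaccard similarity over k-gram sets and an invented detection threshold, not the paper's exact similarity calculation; the API traces are made-up examples of Windows API calls.

```python
def kgrams(api_calls, k=3):
    """Abstract an API call sequence as the set of its k-grams."""
    return {tuple(api_calls[i:i + k]) for i in range(len(api_calls) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Illustrative data: a known malicious behaviour profile and an observed trace.
malicious_profile = kgrams(["OpenProcess", "VirtualAllocEx", "WriteProcessMemory",
                            "CreateRemoteThread", "CloseHandle"])
observed_trace = kgrams(["OpenProcess", "VirtualAllocEx", "WriteProcessMemory",
                         "CreateRemoteThread", "WaitForSingleObject", "CloseHandle"])

similarity = jaccard(malicious_profile, observed_trace)
print(f"similarity = {similarity:.2f}")
if similarity > 0.5:          # illustrative threshold, not from the paper
    print("trace matches a known malicious behaviour profile")
```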

]]>
Jun 2016
<![CDATA[Reduction Effect of Traffic Accidents by Function of Drowsiness Detection]]> Source:Computer Science and Information Technology  Volume  4  Number  2  

Masahiro Miyaji   

Drowsiness is regarded as a crucial risk factor that may result in severe traffic accidents. Recently, driving support safety functions that adapt to the driver's psychosomatic state have been highlighted as a way to further reduce the number of traffic accidents. Consequently, the reduction effect of such psychosomatic-adaptive safety functions should be clarified to foster their penetration into the commercial market. This research identified root causes of traffic incident experiences by means of an Internet survey. From a statistical analysis of the traffic incident experiences, the major psychosomatic states just before traffic incidents were identified as haste, distraction, and drowsiness. This research focused on driver drowsiness while driving. Using the Kohonen neural network, the research evaluated the accuracy of detecting a state of drowsiness, introducing six types of facial expression as a self-organizing map. Finally, the research estimated the reduction effect of detecting driver drowsiness on traffic accidents. The result of the estimation was verified by comparison with the reduction effect of ESC.

]]>
Apr 2016
<![CDATA[Blocks of Monotone Boolean Functions]]> Source:Computer Science and Information Technology  Volume  4  Number  2  

Tkachenco V.G.   and Sinyavsky O.V.   

This paper proposes a method of constructing blocks of monotone Boolean functions (MBFs), developed for the classification and analysis of these functions. Using only nonisomorphic blocks considerably simplifies the enumeration of MBFs. The application of the block-construction method is illustrated by the classification and analysis of MBFs of 0 to 4 variables. The method can be used to count all MBFs of a given rank n.
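For small numbers of variables, the MBF counts can be checked by brute force; the Python sketch below simply enumerates all truth tables and keeps the monotone ones, which is feasible for up to 4 variables. It is not the paper's block-construction method, only an independent way to reproduce the counts 2, 3, 6, 20, 168.

```python
from itertools import product

def count_monotone(n):
    """Brute-force count of monotone Boolean functions of n variables (feasible for n <= 4)."""
    inputs = list(product((0, 1), repeat=n))
    # Pairs of input indices (i, j) with inputs[i] <= inputs[j] componentwise, used to test monotonicity.
    leq_pairs = [(i, j) for i, x in enumerate(inputs) for j, y in enumerate(inputs)
                 if all(a <= b for a, b in zip(x, y))]
    count = 0
    for bits in product((0, 1), repeat=len(inputs)):        # every possible truth table
        if all(bits[i] <= bits[j] for i, j in leq_pairs):
            count += 1
    return count

for n in range(5):
    print(n, count_monotone(n))      # expected: 2, 3, 6, 20, 168
```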

]]>
Apr 2016
<![CDATA[SLUM: Service Layered Utility Maximization Model to Guide Dynamic Composite Webservice Selection in Virtual Organizations]]> Source:Computer Science and Information Technology  Volume  4  Number  2  

Abiud Wakhanu Mulongo   Elisha T. Opiyo Omulo   Elisha Abade   William Okello Odongo   and Bernard Manderick   

Dynamic webservice composition is a promising ICT support service for virtual organizations. However, it remains a nondeterministic polynomial (NP) hard problem despite more than 10 years of extensive research, which limits the applicability of the technique to problems of industrial relevance. In [48], we proposed a layered method, SLUM, to combat the problem. Analytically, SLUM overcomes the relative weaknesses of two widely used approaches in the literature: the local planning strategy (hereafter L-MIP) and the Mixed Integer Programming method (S-MIP). Despite the promising benefits of SLUM, it is unknown to what extent and under what circumstances SLUM is better or worse than L-MIP and S-MIP. The objective of this study was to investigate the relative performance of SLUM with respect to S-MIP and L-MIP using two performance criteria: solution quality and CPU running time. Several randomly generated two-task workflows of monotonically increasing hardness in the number of webservices per task were used to benchmark SLUM against the other two algorithms. A set of numerical and statistical techniques was used to experimentally compare the solution quality and the running-time growth of SLUM against L-MIP and S-MIP. We determined that SLUM generates solutions with an average quality of 93% with respect to the global optimum. Further, we show that SLUM yields solutions of 5% higher quality than L-MIP. On the other hand, we established that L-MIP outperforms both S-MIP and SLUM by multiple factors in terms of computational efficiency. However, we find that for problem instances with fewer than 22 webservices per task, S-MIP is about 1.3 times faster than SLUM. Beyond n=22, the running time of SLUM teB, expressed in terms of the running time of S-MIP teA, is given by teB = teA^0.78. We also establish that SLUM is asymptotically 3.6 times faster than S-MIP on average. We conclude that, in order for a virtual enterprise broker to obtain maximum benefit from dynamic service composition, the broker should combine the three techniques as follows: (1) for service requests without global constraints, L-MIP is the most suitable method; (2) where global constraints are needed and the number of service providers per task is less than 22, S-MIP is preferred; and (3) in scenarios where the number of service providers per task is more than 22 and global constraints must be satisfied, SLUM is superior to both S-MIP and L-MIP.

]]>
Apr 2016
<![CDATA[Algebraic Theoretic Properties of the Non-associative Class of (132)-Avoiding Patterns of AUNU Permutations: Applications in the Generation and Analysis of a General Cyclic Code]]> Source:Computer Science and Information Technology  Volume  4  Number  2  

Chun P. B   Ibrahim A.A   and Garba A.I   

In [1], and based on the report in [2], the author established an interplay between the adjacency matrices of Eulerian graphs constructed by the application of AUNU numbers and the generation and analysis of a general linear code. This was achieved by constructing a [5 3 2]-linear code C of size M=8. This paper reviews that construction of a linear code and extends the approach to a larger (linear cyclic) code, a super code C1 of the [5 3 2]-linear code C of size M=8, i.e., C ⊆ C1. To achieve this, the generator matrix G from [1] that generated C is further developed to give a matrix G1, which spans a larger linear code C1 of length n=5, dimension K=4, and size M=32. This is attained by exhausting the cyclic shifts of the rows of the matrix G to give G1. It is then shown, through existing remarks and proven theorems, that the linear code generated by G1 is cyclic and has generator polynomial g(x)=1+x.

]]>
Apr 2016
<![CDATA[Performance Evaluation of an Improved Model for Keyphrase Extraction in Documents]]> Source:Computer Science and Information Technology  Volume  4  Number  1  

Awoyelu I.O.   Abimbola R.O.   Olaniran A.T.   Amoo A.O   and Mabude C.N.   

Keyphrases are among the most important parts of a document, giving insight into what a specific document is about. Keyphrase extraction systems are becoming increasingly vital for extracting quality keyphrases, i.e., phrases that describe the document at hand. Existing keyphrase extraction systems that employ the unsupervised approach extract non-domain-specific keyphrases, thereby producing generic keyphrases. An improved model for domain-specific keyphrase extraction in journal articles is therefore proposed in this study. It is a framework that employs document structure, term frequency and inverse document frequency, a noun phrase identifier, and domain knowledge for keyphrase extraction. Data used in this research include nouns and stop words in the English language. Author-assigned keyphrases were extracted from the International Journal of Data Mining and Knowledge Processing (IJDKP) between 2011 and 2014 for building the domain knowledge and testing the system. The system was implemented using the Java programming language and the MySQL query language. Evaluation was carried out using precision, recall, and F-measure as performance metrics. The results obtained show that the proposed system yielded an average precision, recall, and F-measure of 27%, 53%, and 35%, respectively, compared to the existing model, MAUI, which yielded an average precision, recall, and F-measure of 23%, 45%, and 35%, respectively. This shows that the proposed model outperformed the existing model by 5%.
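A minimal sketch of the TF-IDF part of such a pipeline is given below in Python (the original system was implemented in Java). The corpus is a made-up stand-in, and the document-structure, noun-phrase, and domain-knowledge components of the proposed model are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny illustrative corpus standing in for journal articles (not the IJDKP data).
docs = [
    "data mining techniques for association rule mining in large databases",
    "knowledge discovery and data mining applied to medical records",
    "clustering algorithms and distance measures for document clustering",
]

# Candidate keyphrases are uni- to tri-grams scored by TF-IDF; stop words removed.
vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

doc_id = 0
row = tfidf[doc_id].toarray().ravel()
top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:5]
print(f"top candidate keyphrases for document {doc_id}:")
for phrase, score in top:
    print(f"  {phrase:35s} {score:.3f}")
```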

]]>
Jan 2016
<![CDATA[Artificial Intelligence in Knowledge-based Technologies and Systems]]> Source:Computer Science and Information Technology  Volume  4  Number  1  

Viktor Krasnoproshin   Vladimir Obraztsov   Vladimir Rjazanov   and Herman Vissia   

A modification of the paradigm of Artificial Intelligence (AI) is proposed in the paper. The modification is based on the assumption that there are algorithms which are inductive by construction, but can be mathematically proved. The content of traditional artificial intelligence concepts (knowledge, form of presentation, knowledge base, etc.) is determined within the proposed paradigm. The modification ensures unification of many concepts in the field of artificial intelligence.

]]>
Jan 2016
<![CDATA[Linking Library Profession and the Market Place: Finding Connections for the Library in the Digital Environment]]> Source:Computer Science and Information Technology  Volume  4  Number  1  

Behdja Boumarafi   

The fast-paced changing information environment calls for continual development of libraries and information institutions to cope with the changes of the digital environment. To ensure this, there is a need to understand current employment trends to identify the competencies and skills required of professionals. The advent of information and communication technology (ICT) has created a real paradigm shift in library operations, so information professionals are working at the leading edge of the internet and web technology, leading to more web-based services. Moreover, the digital environment is reshaping the whole context within which information is generated, processed, and delivered through online networks. This has strategic significance for library professionals in facing the challenges of an ever-changing information environment in which ICT is the major source of survival and development. This has a direct impact on library development, a development which is reflected in the literature as required by the emerging market place. The inevitable consequence for professionals is to be equipped with the necessary skills, traits, and competencies applicable to the cyber environment to meet the demands of the ever-changing job market. The paper identifies the characteristics required of library and information (LIS) workers to serve the expanding and changing information market of the 21st century, as reflected in job announcements posted on the IFLA listserv from January to October 2014, in order to create a synergy between the profession and current employment trends. A sample set of 259 job advertisements posted by academic, public, and special libraries is selected and grouped into four categories: 1. technology-based positions, with 48.71% of the advertisements; 2. public/information services skills, listed in 22.87% of the openings; 3. technical services skills, in 18.64% of the available jobs; and 4. personal attributes, mentioned in 9.74% of positions. Most positions are available in academic libraries.

]]>
Jan 2016
<![CDATA[Multi-attribute Decision-making Model of Grey Target for Information System Evaluation]]> Source:Computer Science and Information Technology  Volume  4  Number  1  

Sha Fu   Guang Sun   and Yezhi Xiao   

A multi-attribute grey-target decision model based on positive and negative target centers is put forward to address the complexity and uncertainty of the actual decision environment. First, the optimal effect vector and the worst effect vector of the grey-target decision are defined as the positive and negative clouts, respectively. Second, the spatial projection distance between an alternative and the positive and negative clouts is considered comprehensively, and, with the off-target distance as the basis of a vector analysis in this space, a new comprehensive target-center distance is obtained. Then, a goal programming model is built with an objective function based on the comprehensive target-center distance and solved to obtain the index weights. Finally, the feasibility and validity of the proposed grey-target decision model are verified through a case analysis of information system evaluation.
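The positive/negative target-center idea can be illustrated with a simplified, TOPSIS-style sketch: distances of each alternative to the optimal and worst effect vectors are combined into a closeness score. The decision matrix and weights below are invented, and the paper's comprehensive target-center distance and goal-programming weight model are not reproduced.

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = benefit-type criteria (illustrative values).
X = np.array([
    [0.7, 0.6, 0.9],
    [0.8, 0.5, 0.7],
    [0.6, 0.8, 0.8],
])
w = np.array([0.4, 0.3, 0.3])          # criterion weights (assumed known here)

pos_center = X.max(axis=0)             # optimal effect vector  (positive clout)
neg_center = X.min(axis=0)             # worst effect vector    (negative clout)

d_pos = np.sqrt(((w * (X - pos_center)) ** 2).sum(axis=1))   # distance to the positive clout
d_neg = np.sqrt(((w * (X - neg_center)) ** 2).sum(axis=1))   # distance to the negative clout

closeness = d_neg / (d_pos + d_neg)    # larger is better
print("closeness:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness))
```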

]]>
Jan 2016
<![CDATA[Impact of Business Problem Characteristics on the Architecture and Specification of Integration Framework]]> Source:Computer Science and Information Technology  Volume  4  Number  1  

Monika Tsaneva   and S. Kouzmanov   

Modern enterprise information environments usually host many different software systems that are involved in a wide range of business tasks and operations. For successful usage of the whole heterogeneous information environment of an enterprise, the integration framework used to connect the different software systems should take into account the specifics of the business problem and scope. Different needs can vary a lot, but they can still be generalized into larger groups of common integration and data processing problems. This can be used as a foundation for researching the impact of business problems and needs on the integration frameworks used in an enterprise. This paper uses a brief, general classification of business problems and proposes a mapping between business needs and integration approaches.

]]>
Jan 2016
<![CDATA[Discovery of Gene-disease Associations from Biomedical Texts]]> Source:Computer Science and Information Technology  Volume  4  Number  1  

Wen-Juan Hou   and Bo-Yuan Kuo   

Due to the ever-expanding growth of biomedical publications, biologists have to retrieve up-to-date information from a vast literature to ensure that they do not neglect significant publications. It is becoming more and more important to handle the extraction problem from biomedical texts automatically. This paper focuses on automatically identifying the relationships between human genetic diseases and genes in the biomedical literature. The experimental data are retrieved from the Mendelian Inheritance in Man (MIM) morbid entries of the Online Mendelian Inheritance in Man (OMIM) database. We propose a hybrid method combining rule learning and statistical techniques. To collect the corpus used in the research, the first step is to find the sentences that mention both the related human genetic diseases and genes from the morbid file; these are regarded as the correct sentences. In the second step, sentences that mention neither the related human genetic diseases nor the genes from the morbid file are randomly selected and regarded as the incorrect sentences. Next, the Memory-Based Shallow Parser is used to analyze these sentences and obtain information for finding rules in the following step. Then, learning rules are obtained with a rule learner, the ALEPH system. These generated rules are applied to catch pairs of human genetic diseases and genes within one sentence. The study then proposes a statistical approach, called the Z-score method, to determine whether the pairs are valid. Finally, experiments are conducted under various constraints and with different numbers of rules. The evaluation metrics in the experiments are precision, recall, and F-score.

]]>
Jan 2016
<![CDATA[A Ranking Algorithm for Mitigating the Influence of Contrived Ratings on Review Sites]]> Source:Computer Science and Information Technology  Volume  3  Number  6  

Keiichi Endo   Ryoki Horio   and Dai Okano   

In this paper, we propose a ranking algorithm for mitigating the influence of contrived ratings by spammers that try to manipulate rankings on review sites. We set a credibility level for each user, which is determined using the Pearson correlation coefficient of the rating given to an object by the user and estimated quality of the object. The estimated quality is the average value of the ratings weighted with the credibility level of raters. We propose a method that uses a provisional estimated quality calculated by eliminating the rating of a user when calculating that user's credibility level. Furthermore, we propose a method that considers the number of ratings given to an object when calculating the estimated quality of the object. Moreover, we demonstrate the superiority of the proposed methods by conducting a comparative experiment using an actual data set.
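A minimal sketch of the credibility-weighting idea follows: estimated quality is a credibility-weighted average of ratings, and each user's credibility is the (non-negative) Pearson correlation between that user's ratings and a provisional quality estimate computed without that user's own ratings. The toy rating matrix, the fixed number of iterations, and the omission of the rating-count factor are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

# ratings[u][o]: rating of object o by user u; NaN means "not rated" (toy data).
ratings = np.array([
    [5.0, 4.0, 1.0, 5.0],
    [4.0, 5.0, 2.0, 4.0],
    [1.0, 1.0, 5.0, 1.0],      # a user whose ratings run against the consensus
    [5.0, 4.0, 2.0, np.nan],
])

n_users, n_objects = ratings.shape
credibility = np.ones(n_users)
rated = ~np.isnan(ratings)
r = np.nan_to_num(ratings)

for _ in range(20):                                 # alternate quality and credibility estimates
    # Estimated quality: credibility-weighted average of the ratings each object received.
    quality = (credibility[:, None] * r * rated).sum(axis=0) / (credibility[:, None] * rated).sum(axis=0)
    for u in range(n_users):
        mask = rated[u]
        if mask.sum() < 2:
            continue
        # Provisional quality that excludes user u's own ratings, then Pearson correlation.
        others = np.arange(n_users) != u
        q_u = (credibility[others, None] * r[others] * rated[others]).sum(axis=0) \
              / np.maximum((credibility[others, None] * rated[others]).sum(axis=0), 1e-9)
        rho = np.corrcoef(ratings[u, mask], q_u[mask])[0, 1]
        credibility[u] = max(rho, 0.0)               # negative correlation -> zero credibility

print("credibility:", np.round(credibility, 2))
print("estimated quality:", np.round(quality, 2))
```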

]]>
Nov 2015
<![CDATA[A Discovery of the Relevance of Eastern Four-valued (Catuskoti) Logic to Define Modular Transformations When There are Multiple Ways of Representing the Same Modular Transformation]]> Source:Computer Science and Information Technology  Volume  3  Number  6  

Madanayake R S   Dias G K A   and Kodikara N D   

According to international researchers, there were two methods of performing modular transformations from Entity Relationship Diagrams to Class Diagrams. In order to establish which method would best suit software engineers, we conducted a survey of a group of students in the computer science field, whom we considered potential future software engineers. The results we obtained were valid, but did not match those of any of the international researchers. We found that this situation could only be explained using Eastern four-valued logic, also known by such names as Catuskoti and Tetralemma.

]]>
Nov 2015
<![CDATA[A Unified Framework for Two-guard Walk Problem]]> Source:Computer Science and Information Technology  Volume  3  Number  6  

John Z. Zhang   

We propose a unified framework to study the walk problem in a polygonal area by two collaborative guards. A walk is conducted by the two mobile guards on the area's boundary. They start at an initial boundary point, move along the boundary, and may possibly meet again at an ending boundary point. It is required that the two guards maintain their mutual visibility at all times. Depending on the geometric properties of the polygonal area, a walk may or may not be possible. In this work, three versions of the problem, namely general polygon walk, room walk, and street walk, are characterized in a unified manner in our framework. An additional merit of our framework is its simplicity. Applications of the walk problem by two guards include military rescue, area exploration, art gallery surveillance, etc.

]]>
Nov 2015
<![CDATA[A Grey Stochastic Multi-criteria Decision-making Method Based on Hausdorff Distance]]> Source:Computer Science and Information Technology  Volume  3  Number  6  

Sha Fu   

Aiming at stochastic multi-criteria decision problems in which criterion values are extended grey numbers, a grey stochastic multi-criteria decision-making method based on the Hausdorff distance is proposed. First, definitions, calculation rules, and distance formulas for extended grey number random variables and their expectations are given. Then, based on the grey decision matrix and the probabilities of the natural states, the expectation decision matrix of extended grey numbers is obtained. Combined with the calculation of the weight vector for each criterion, the distances of each alternative to the positive and negative ideal solutions are then obtained. Ultimately, the relative closeness is determined and the alternatives are ranked according to its value. A case study verifies the feasibility and effectiveness of the proposed method.

]]>
Nov 2015
<![CDATA[The Design and Implementation of Combining the Standard of Data and Integrated Water Resource Data in Distributed Cloud Computing Environment]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Feng-Cheng Lin   Ting-Wu Ho   Chen-Yu Hao   and Che-Hui Lin   

The Water Resource Agency (WRA) has integrated cloud computing technologies based on a cloud service framework to set up an easy-to-use data exchange portal. Due to climate change and its effects on global society, the sustainability of water resources has become a significant and highly visible issue. In Taiwan, the demand for water resource information has increased among government agencies, private institutions, non-profit organizations, and households. Therefore, WRA has followed the trend of open data, expanded the nodes of the current cloud environment, and upgraded the management tool to maintain the high performance, large capacity, and utility of the cloud environment. Cloud computing is designed to provide the platform for data storage and user-friendly application procedures and interfaces in WRA. Personal data security is also addressed in this paper. In addition, to ensure the correctness and stability of the Taiwan water data exchange standard, several procedures and systems are developed to support data examination.

]]>
Sep 2015
<![CDATA[A New Similarity Measure for Combining Conflicting Evidences]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Nadeem Salamat   and Nadeem Akhter   

In Dempster-Shafer (DS) theory, information from distinct sources is combined to obtain a single Basic Probability Assignment (BPA) function. The well-known Dempster-Shafer combination rule provides a weak solution to the management of conflicting information at the normalization stage; the rule can even fail and produce counter-intuitive results when combining highly conflicting information. This paper presents a new similarity measure for the combined averaging methods, in which any distance measure between bodies of evidence can be used. Numerical examples show promising and more intuitive results.
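For reference, the classical Dempster combination rule and its counter-intuitive behaviour under high conflict can be reproduced with a few lines of Python. The example below is Zadeh's well-known highly conflicting case, not data from the paper, and the paper's new similarity-weighted averaging method itself is not implemented here.

```python
def dempster_combine(m1, m2):
    """Classical Dempster combination of two BPAs given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Zadeh's classic highly conflicting example over the frame {A, B, C}.
m1 = {frozenset({"A"}): 0.99, frozenset({"C"}): 0.01}
m2 = {frozenset({"B"}): 0.99, frozenset({"C"}): 0.01}
combined, K = dempster_combine(m1, m2)
print("conflict K =", round(K, 4))                                # 0.9999
print("combined   =", {"".join(s): round(v, 4) for s, v in combined.items()})
# All mass ends up on {C}, although both sources barely support it.
```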

]]>
Sep 2015
<![CDATA[Business Process Linguistic Modeling – Philosophy & Principles]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Jozef Stasak   

This contribution deals with business process modeling problems using a linguistic approach, with special attention paid to business process modeling based on semantic networks and reference databases. It consists of three parts. In the first part, the reader will find an answer to the question "Why business process modeling based on a linguistic approach?"; the second part deals with the interconnection between key performance indicators (KPIs) and business process metric items; and the third, closing part deals with business process modeling functionality based on semantic networks and reference databases.

]]>
Sep 2015
<![CDATA[Visual Saliency Based Multiple Objects Segmentation and its Parallel Implementation for Real-Time Vision Processing]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Hirokazu Madokoro   Yutaka Ishioka   Satoshi Takahashi   Kazuhito Sato   and Nobuhiro Shimoi   

This paper presents a segmentation method for multiple object regions based on visual saliency. Our method comprises three steps. First, attentional points are detected using saliency maps (SMs). Subsequently, regions of interest (RoIs) are extracted using the scale-invariant feature transform (SIFT). Finally, foreground regions are extracted as object regions using GrabCut. Using RoIs as teaching signals, our method achieves automatic segmentation of multiple objects without learning in advance. In experiments using the PASCAL2011 dataset, attentional points were extracted correctly from 18 images with two objects and from 25 images with single objects. We obtained segmentation accuracies of 64.1% precision, 62.1% recall, and 57.4% F-measure. For real-time video processing, we implemented our model on an IMAPCAR2 evaluation board; the processing cost was 47.5 ms for video images of 640 × 240 pixel resolution. Moreover, we applied our method to time-series images obtained with a mobile robot. Out of ten images, attentional points were extracted correctly for seven images with two objects and three images with single objects. We obtained segmentation accuracies of 58.0% precision, 63.1% recall, and 58.1% F-measure.

]]>
Sep 2015
<![CDATA[Basic Design of Visual Saliency Based Autopilot System Used for Omnidirectional Mobile Electric Wheelchair]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Hirokazu Madokoro   Keigo Shirai   Kazuhito Sato   and Nobuhiro Shimoi   

This paper presents a fundamental design of an autopilot system to actualize automatic locomotion for an electric wheelchair, with emphasis on simplicity and functionality. For this study, we designed a novel electric wheelchair with advanced mobility using Mecanum wheels, which actualize omnidirectional movement without turning, and developed a prototype with consideration devoted to the exterior design. Our design concept is the Electric Personal Assistive Mobility Device (EPAMD), which integrates easily into a person's daily life. To prevent collisions, ranging sensors and depth sensors are used for environmental recognition. This paper presents global locomotion and local locomotion as frameworks for the autopilot. For global locomotion, we address algorithms for visual landmark detection based on visual saliency and the creation of category maps based on adaptive and unsupervised machine learning. Our method visualizes time-series features and their relations on visual landmarks in a low-dimensional space. We examine a novel design and its possible adaptation to an electric wheelchair as an EPAMD, especially intended to improve the independence of elderly people in their daily life.

]]>
Sep 2015
<![CDATA[Particle-filter Multi-target Tracking Algorithm Based on Dynamic Salient Features]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Zhang Yan   Shi Zhi-guang   LI Ji-Cheng   and Yang Wei-ping   

In order to address the problem of tracking different moving targets in image sequences against complicated backgrounds, this paper presents a particle-filter multi-target tracking algorithm based on dynamic salient features. Drawing on research findings on visual attention, the algorithm adopts robust dynamic salient features, obtained by combining the gray-scale and detail information with the motion characteristics of the targets, as the state vector of the particle filter. The algorithm is highly robust because the salient features originate from the low-level features of the targets, while the particle filter allows optimized estimation for non-linear and non-Gaussian models. As a consequence, the algorithm is capable of managing tracks for different targets and dealing with their appearance, disappearance, merging, splitting, and occlusion by obstacles. Experiments show that the new algorithm enables tracking of multiple targets in complicated image sequences.

]]>
Sep 2015
<![CDATA[GIS for Pandemic Zoning: Application of Brampton, Ontario, Canada]]> Source:Computer Science and Information Technology  Volume  3  Number  5  

Roland Daley   Flor Ferreras   John Orly   and Rifaat Abdalla   

This paper examines the use of Geographic Information Systems (GIS) in mass casualty zoning for a pandemic in the city of Brampton, Ontario. Canadians are generally familiar with the SARS outbreak in 2003 and the effect that it had on medical care facilities, human resources, and the national economy. In 2009, Canadians were faced with another pandemic, the H1N1 influenza. These pandemics created widespread panic among many residents, and the lesson learned is that health officials need to plan effectively for another outbreak. The study focuses on Brampton, one of the western cities in the Greater Toronto Area (GTA) in the Province of Ontario, Canada, where an urbanized and concentrated population is served by one main medical health facility, Brampton Civic Hospital. This research examined the capacity of the current facility to deal with an emergency surge, specifically where a large number of health care professionals is needed to deal with emergency situations. The paper provides a model for emergency management professionals to use GIS as a decision-making tool in assessing risks, tracking outbreaks, and maintaining situational awareness during pandemic outbreaks. It provides an analysis of means and ways for minimizing disruptions which may occur in a community.

]]>
Sep 2015
<![CDATA[Classification of Trajectories Using Category Maps and U-Matrix to Predict Interests Used for Event Sites]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Hirokazu Madokoro   Kazuhito Sato   and Nobuhiro Shimoi   

This paper presents a method for classification and recognition of behavior patterns based on interest from human trajectories at an event site. Our method creates models using Hidden Markov Models (HMMs) for each human trajectory quantized using One-Dimensional Self-Organizing Maps (1D-SOMs). Subsequently, we apply Two-Dimensional SOMs (2D-SOMs) for unsupervised classification of behavior patterns from features according to the distance between models. Furthermore, we use a Unified distance Matrix (U-Matrix) for visualizing category boundaries based on the Euclidean distance between weights of 2D-SOMs. Our method extracts typical behavior patterns and specific behavior patterns based on interest as ascertained using questionnaires. Then our method visualizes relations between these patterns. We evaluated our method based on Cross Validation (CV) using only the trajectories of typical behavior patterns. The recognition accuracy improved by 9.6% over that of earlier models. We regard our method as useful to estimate interest from behavior patterns at an event site.

]]>
Jul 2015
<![CDATA[E-Government, Open Data, and Security: Overcoming Information Security Issues with Open Data]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Abdurakhmanov Abduaziz Abdugaffarovich   Varisov Akmal Abbasovich   and Nasrullaev Nurbek Bakhtiyarovich   

The focus of this article is to provide a proposed solution for how to deal with the information security problems faced by nations seeking to implement e-government systems with open government data. Understanding what threats are posed to information security in e-government, and how to properly assess and deal with them, is necessary for a functional system based around open government data. Through the implementation of new policies, legal structuring, and the development of new technologies that aid use, it is possible to overcome these challenges.

]]>
Jul 2015
<![CDATA[Image Tamper Detection and Recovery based on Dilation and Chaotic Mixing]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Hao-Chun Wang   Wei-Ming Chen   and Ping-Yi Lee   

In this paper, we propose an efficient method for image tamper detection and recovery. We separate the image into several blocks and share each block's information with two other blocks; that is, there are two copies of the information of each non-overlapping block. In our implementation, we improve Lee's watermark-embedding algorithm, especially when the tampered area is a rounded region or a text field, and our scheme achieves a higher PSNR value than Lee's method. During image recovery, we use an image inpainting approach to fill the tampered region of the whole image. The experimental results show that our scheme is more effective than Lee's interpolation method.

]]>
Jul 2015
<![CDATA[Implementing Remote Presence Using Quadcopter Control by a Non-Invasive BCI Device]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Jzau-Sheng Lin   and Zi-Yang Jiang   

This paper proposes extracting neural signals to control a quadcopter wirelessly, enabling hands-free, silent, and effortless human-mobile interaction with remote presence. Brain activity is recorded in real time with a cheap off-the-shelf electroencephalogram (EEG) headset, the Emotiv EPOC, and patterns are discovered that relate it to facial-expression states. A tablet-based mobile framework running Android is developed to convert these discovered patterns into commands that drive the quadcopter, an AR.Drone 2.0, through a wireless interface. First, neural signals are sequentially extracted from the headset and transmitted to the tablet. In the tablet, the large EEG feature vectors are reduced using Principal Component Analysis (PCA) to recognize the facial expression, generate suitable commands, and drive the quadcopter through the wireless interface. Finally, the quadcopter flies smoothly in accordance with the commands converted from the EEG signals. The experimental results show that the proposed system can easily control quadcopters.
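The PCA-based reduction and expression classification step can be sketched as follows; the EEG feature vectors, class labels, number of components, and classifier choice below are synthetic, illustrative assumptions rather than the system's actual signal processing chain.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG feature vectors (e.g. band powers per channel); 3 expression classes.
n_per_class, n_features = 100, 64
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features)) for c in range(3)])
y = np.repeat([0, 1, 2], n_per_class)            # 0=neutral, 1=smile, 2=clench (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=10)                        # reduce the 64-D features to 10 components
Z_train = pca.fit_transform(X_train)
Z_test = pca.transform(X_test)

clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
pred = clf.predict(Z_test)
print("explained variance:", round(pca.explained_variance_ratio_.sum(), 3))
print("expression accuracy:", round((pred == y_test).mean(), 3))
# Each predicted expression could then be mapped to a drone command, e.g. {0: "hover", 1: "up", 2: "land"}.
```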

]]>
Jul 2015
<![CDATA[A Novel Smart Card-based Remote User Authentication Mechanism]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Deborah Uwera   and Dongho Won   

The past few years have seen a rapid progress of multi-user computing environments. Numerous security mechanisms have therefore been employed in a bid to ensure that sensitive information in computer systems does not get destroyed, copied or even altered by unauthorized users. Remote users attempting to login into a particular system would therefore have to authenticate themselves to the server and vice versa. This paper proposes a novel remote user authentication scheme using smart cards. Our scheme endeavors to be an efficient yet secure scheme, hence we chose to use only one-way hash functions and XOR operations, in order to avoid computationally complex operations. We also conducted a security analysis on our scheme to ensure that it is secure against possible known attacks.
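The flavour of a hash-and-XOR smart card scheme can be illustrated with the toy challenge-response below; the registration values, message flow, and identifiers are invented for illustration and do not reproduce the paper's actual protocol or its security analysis.

```python
import hashlib, os

def h(*parts: bytes) -> bytes:
    """One-way hash used by both card and server."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- registration (server side): the card stores a value masked with the server secret ---
server_secret = os.urandom(32)
identity, password = b"alice", b"correct horse"
card_value = xor(h(identity, password), h(identity, server_secret))   # stored on the card

# --- login (card side): recompute the mask from the entered password and answer a nonce ---
nonce = os.urandom(16)                                                # server challenge
mask = xor(card_value, h(identity, password))                         # equals h(identity, server_secret)
response = h(mask, nonce)

# --- verification (server side) ---
expected = h(h(identity, server_secret), nonce)
print("login accepted:", response == expected)
```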

]]>
Jul 2015
<![CDATA[Methods of Modeling the Process of Building the Conceptual Model of TIAV Multimedia System]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Beknazarova Saida Safibullayevna   

The article describes methods for modeling the process of building a conceptual model of a multimedia system. It presents the principal model of the multimedia system, the development of an algorithm for constructing the online system "Designer of TIAV multimedia systems", the information-functional model of this online constructor, and the conceptual model of the process of processing information resources in TIAV multimedia systems, as well as a simulation model of the discrete-continuous processes of processing information resources.

]]>
Jul 2015
<![CDATA[ICT and Emergency Volunteering in Jordan: Current and Future Trends]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Nidhal El-Omari   Mohamad Alzaghal   and Sameh Ghwanmeh   

Volunteering plays an essential role in the development of any society. In Jordan, the volunteering concept has started to play a considerable role in both economic and social security development. An emergency volunteering system is the first line of defense against emergencies of all kinds before the intervention of the central government, and volunteering plays a vital role in minimizing human and monetary losses due to natural and man-made disasters. In the context of Information and Communications Technology (ICT), volunteering has acquired momentum and a new domain for its activity. In this paper, current and future trends for implementing ICT in volunteering systems in Jordan are discussed, and an emergency volunteering system is proposed to enhance mitigation and response capabilities for future disasters in Jordan using the digital medium.

]]>
Jul 2015
<![CDATA[A Hierarchical Multilayer Service Composition Model for Global Virtual Organizations]]> Source:Computer Science and Information Technology  Volume  3  Number  4  

Abiud Wakhanu Mulongo   Elisha T. Opiyo Omulo   and William Okello Odongo   

A major benefit of service composition is the ability to support agile global collaborative virtual organizations. However, being global in nature, collaborative virtual organizations can have several virtual industry clusters (VIC), where each VIC has hundreds to thousands of virtual enterprises that provide functionally similar services exposed as web services. These web services can be differentiated along a high-dimensional set of quality of service attributes. The dilemma the virtual enterprise broker faces is how to dynamically select the best combination of component services to fulfill a complex consumer need within the shortest time possible. This composite service selection problem remains a Multi-Criteria Decision Making (MCDM) NP-hard problem. Although existing MCDM methods based on local planning are linearly scalable for large problems, they lack the capability to express critical intertask constraints that are practically relevant to service consumers. MCDM global planning methods, on the other hand, suffer from exponential state-space explosion, making them severely limited for large problems of industrial relevance. This paper proposes HMSCM, the Hierarchical Multi-Layer Service Composition Model. HMSCM is based on the theory of Layering as Optimization Decomposition [28-31]. We view the service selection process as a "two-layer network" where each layer is a subproblem to be solved. The objective of one layer is to maximize a local utility function over a subset of web service QoS attributes from the service consumer's perspective; the objective of the other layer is to maximize a local utility function over another subset of QoS attributes from the perspective of the virtual enterprise broker. We develop the Service Layered Utility Maximization (SLUM) algorithm, which extends the Mixed Integer Programming model in [9], and formulate the problem at each layer in the form of SLUM. Together, the two layers attempt to achieve the global optimization objective of the network. We show analytically how HMSCM overcomes the shortcomings of existing local planning and global planning service selection methods while retaining the strengths of each, i.e., HMSCM scales linearly with the number of QoS variables and the number of web services while being able to enforce global intertask constraints.

]]>
Jul 2015
<![CDATA[Significant Location Detection & Prediction in Cellular Networks using Artificial Neural Networks]]> Source:Computer Science and Information Technology  Volume  3  Number  3  

Cristian-Liviu Leca   Ioan Nicolaescu   and Cristian-Iulian Rîncu   

Location services and applications, based on network data or global positioning systems, are greatly influencing and changing the way people use mobile phone networks by improving not only user-applications but also the network management part. These applications and services can be further developed by introducing location prediction. We design a system that logs cell id and timestamp data from the users' mobile device, detects the significance of the location to the user, such as home and workplace, and predicts future locations over a chosen time period using artificial neural networks. A novel method is designed for location detection that automatically determines the significance of the location to the user, by spatial and temporal analysis. In our approach, the neural network is automatically adapted, with the help of the location detection algorithm, to the period of the week for which a prediction is desired, achieving accurate weekday and weekend location prediction.
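A highly simplified sketch of the two stages, significance detection from dwell counts and next-location prediction with a small neural network, is shown below; the synthetic (cell id, weekday, hour) log, the dwell threshold, and the network size are illustrative assumptions, not the system described in the paper.

```python
from collections import Counter
from sklearn.neural_network import MLPClassifier

# Synthetic (cell_id, weekday, hour) log: weekdays at cell 10 ("work"), nights at cell 1 ("home").
log = [(10, d, h) for d in range(5) for h in range(9, 18)] + \
      [(1, d, h) for d in range(7) for h in list(range(0, 8)) + list(range(20, 24))]

# Significance detection: a cell is significant if the user dwells there often enough.
dwell = Counter(cell for cell, _, _ in log)
significant = {cell for cell, n in dwell.items() if n >= 20}      # illustrative threshold
print("significant cells:", significant)

# Location prediction: learn the cell id from (weekday, hour) with a small neural network.
X = [[d, h] for _, d, h in log]
y = [cell for cell, _, _ in log]
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("Tuesday 11:00 ->", model.predict([[1, 11]])[0])            # expect the 'work' cell
print("Sunday  23:00 ->", model.predict([[6, 23]])[0])            # expect the 'home' cell
```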

]]>
May 2015
<![CDATA[An Improvement of Plagiarized Area Detection System Using Jaccard Correlation Coefficient Distance Algorithm]]> Source:Computer Science and Information Technology  Volume  3  Number  3  

Kwangho Song   Jihong Min   Gayoung Lee   Sang Chul Shin   and Yoo-Sung Kim   

In this paper, a plagiarized area detection system is proposed in which the Jaccard coefficient is used for filtering, to improve the processing time against a huge volume of documents. The proposed system filters candidate documents in order to efficiently detect plagiarized areas against a huge volume of original documents, using two algorithms: the Jaccard coefficient distance algorithm and the cosine distance algorithm. Since the Jaccard coefficient distance algorithm computes the distance between two documents based only on the presence of words, whereas the cosine distance algorithm also uses word frequencies, the Jaccard algorithm is faster than the cosine one; hence, for efficiency, we use the Jaccard coefficient distance algorithm as the first filter. According to the experimental comparison between the proposed system and our previous system, the newly proposed system outperforms the previous one with about 30% reduced processing time.
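The two-stage filtering idea can be sketched as follows; the thresholds and the sample texts are invented, and word-level sets and frequencies stand in for whatever document representation the actual system uses.

```python
import math
from collections import Counter

def jaccard_distance(a: str, b: str) -> float:
    wa, wb = set(a.split()), set(b.split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def cosine_distance(a: str, b: str) -> float:
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return 1.0 - dot / norm if norm else 1.0

suspect = "the quick brown fox jumps over the lazy dog"
originals = [
    "a quick brown fox jumped over a lazy dog",
    "stochastic gradient descent converges under mild assumptions",
]

JACCARD_THRESHOLD, COSINE_THRESHOLD = 0.7, 0.5        # illustrative cut-offs
for doc in originals:
    if jaccard_distance(suspect, doc) > JACCARD_THRESHOLD:     # cheap first-stage filter
        continue
    if cosine_distance(suspect, doc) <= COSINE_THRESHOLD:      # slower second-stage check
        print("possible plagiarism:", doc)
```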

]]>
May 2015
<![CDATA[Automatic Shadow Removal by Illuminance in HSV Color Space]]> Source:Computer Science and Information Technology  Volume  3  Number  3  

Wenbo Huang   KyoungYeon Kim   Yong Yang   and Yoo-Sung Kim   

In intelligent video surveillance systems, the detected moving objects often contain shadows which may deteriorate the performance of object detections. Therefore, shadow detection and removal is an important step employed after foreground extraction. Since HSV color space gives a better separation of chromaticity and intensity, it has been commonly adopted to detect and remove shadow. However, almost all the HSV color space based methods use static thresholds to separate shadows from foreground. In this paper, a dynamic threshold based method is proposed. In the proposed approach, the threshold prediction model is first established by a statistical analysis tool and then the predicted dynamic thresholds are used for shadow detection. Experiments on a self-built dataset show that the proposed method can get better reliability and robustness than the traditional methods using static thresholds.

]]>
May 2015
<![CDATA[Algorithmization in Magneto-elasticity of Thin Plates and Shells of the Complex Configurations]]> Source:Computer Science and Information Technology  Volume  3  Number  3  

Fakhriddin M. Nuraliev   

The work is devoted to the algorithmization of classes of problems in the magneto-elasticity of thin plates and shells of complex planform. Following the general theory of algorithmization proposed by academician V. K. Kabulov, the formation, composition, and structure of the basic banks (data, laws, principles, models, algorithms, applied programs) and of the auxiliary algorithmic banks (statement and operational) are briefly described.

]]>
May 2015
<![CDATA[A Preference-based Privacy Protection for Value-added Services in Vehicular Ad Hoc Networks]]> Source:Computer Science and Information Technology  Volume  3  Number  3  

Iuon-Chang lin   Yi-Lun Chi   Hsiang-Yu Chen   and Min-Shiang Hwang   

Due to the rapid growth of smart devices, the development of VANET is maturing. Although many methods have been proposed to resolve the user privacy issue in vehicular ad hoc networks (VANETs), users still do not know what information is collected (e.g., geolocation) and how it is used. In this paper, we propose a secure and anonymous scheme for communication, based on blind signature techniques, in which users can set their own privacy preferences before joining the VANET. Our proposed scheme lets users know whether their privacy preferences are suitable for the VANET environment and provides appropriate value-added services to them. Finally, we show that our proposed scheme meets various security requirements.

]]>
May 2015
<![CDATA[Detecting Networks Anomalies and Attacks Using 3D Visualization]]> Source:Computer Science and Information Technology  Volume  3  Number  2  

Besnik Camaj   and Etienne Petremand   

3D modeling and visualization has become a key component of scientific research and development in many domains. With the addition of new routers, switches, and firewalls, computer networks are becoming more and more complex. Representing computer networks in three dimensions helps greatly in protecting networks and providing services to them. Therefore, the main purpose of this paper is to detect network anomalies based on a 3D visualization of the computer network. The interaction between the network administrator and the application happens in real time: if a breakdown occurs, the network administrator is informed immediately and can switch off the broken device and then repair it. In practice, we realized a 3D modeling and visualization of our university computer network with more than a thousand devices, and it works well.

]]>
Mar 2015
<![CDATA[Security Techniques in Distributed Systems]]> Source:Computer Science and Information Technology  Volume  3  Number  2  

Reza Nayebi Shahabi   

The security of information systems is the most important principle, and arguably also the most difficult one, because security must be maintained throughout the system. At the beginning of this article we introduce the basic principles of security. The security of distributed systems is divided into two parts: the first part is concerned with the communication between users and processes, where issues such as authentication, message integrity, and encryption are discussed. In the next section, we examine guaranteed access permissions to resources in distributed systems. In addition to traditional access-control solutions, access control for mobile code is examined.

]]>
Mar 2015
<![CDATA[Analysis of Support Vector Regression Model for Micrometeorological Data Prediction]]> Source:Computer Science and Information Technology  Volume  3  Number  2  

Yuya Suzuki   Yukimasa Kaneda   and Hiroshi Mineno   

This paper aims to reveal the appropriate amount of training data for accurately and quickly building a support vector regression (SVR) model for micrometeorological data prediction. SVR is derived from statistical learning theory and can be used to predict a quantity in the future based on training that uses past data. Although SVR is superior to traditional learning algorithms such as the artificial neural network (ANN), it is difficult to choose the most suitable amount of training data to build the appropriate SVR model for micrometeorological data prediction. The challenge of this paper is to reveal the periodic characteristics of micrometeorological data in Japan and determine the appropriate amount of training data to build the SVR model. By selecting the appropriate amount of training data, it is possible to improve both prediction accuracy and calculation time. When predicting air temperature in Sapporo, the prediction error was reduced by 0.1℃ and the calculation time was reduced by 98.7% using the appropriate amount of training data.
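A minimal sketch of such an SVR setup is shown below; the synthetic temperature series, window length, and hyperparameters are illustrative assumptions, and varying the training-set split is the knob the paper studies for balancing accuracy against computation time.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic hourly air-temperature series with a daily cycle plus noise (stand-in for sensor data).
hours = np.arange(24 * 60)
temps = 10 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

# Supervised framing: predict the temperature 1 hour ahead from the last 24 hourly readings.
window = 24
X = np.array([temps[i:i + window] for i in range(temps.size - window - 1)])
y = temps[window + 1:]

split = int(0.8 * len(X))                         # vary the training-set size to study its effect
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.abs(pred - y[split:]).mean()
print(f"mean absolute error on held-out hours: {mae:.2f} degrees C")
```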

]]>
Mar 2015
<![CDATA[A SaaS-based Software Modeling for Bank Intermediary Business]]> Source:Computer Science and Information Technology  Volume  3  Number  2  

Bo Li   Wei-Tek Tsai   Haiying Zhou   and Decheng Zuo   

Software-as-a-Service (SaaS) is a new research direction for developing software; its multi-tenancy architecture and customization features are very suitable for performance and benchmark testing of OLTP transactions. Bank Intermediary Business (BIB) is the most important business of the banking financial system. This paper focuses on establishing a SaaS-based BIB performance and benchmark architecture and proposes the SaaS-based BIB database model (SaaS-BIB-DM), the architecture layer (SaaS-BIB-AL), the data flow view (SaaS-BIB-DF), and the representative transaction model (SaaS-BIB-TM). The database is further extended with the SaaS hybrid two-layer partition methodology, and its performance is shown to be better than that of a three-tier C/S architecture. The specific SaaS-based BIB architecture we propose is a 4-level SaaS-based architecture. Based on an analysis of the state of the art of BIB and SaaS, the paper further investigates future trends of SaaS-based performance testing architectures and benchmarks.

]]>
Mar 2015
<![CDATA[Simulation and Analysis for Activities in Image Recognition Using MATLAB®]]> Source:Computer Science and Information Technology  Volume  3  Number  1  

Thabit Sultan Mohammed   and Ahmmed Saadi Ibrahim   

This paper considers the fact that solutions to problems in the field of digital image processing require a lot of experimental work involving software simulation and testing with large sets of sample images. A short overview of the fundamental steps of digital image processing is presented. The layout and operation of an experimental software system that has been developed and implemented using MATLAB® is introduced. A user-friendly GUI is developed, and two alternative methods for image acquisition are implemented. A few algorithms based on mask operators for image edge detection are studied, programmed, simulated, and evaluated. The paper also includes an analysis and a software implementation of an image matching technique.

]]>
Jan 2015
<![CDATA[Sobol' Sequences Application in Dynamic Stochastic Systems Optimization]]> Source:Computer Science and Information Technology  Volume  3  Number  1  

G. M. Antonova   

The paper overviews modern methods for optimizing system parameters at the design stage, based on the application of LPτ sequences, or Sobol' sequences, which have a uniform distribution density and the best evenness property among modern uniform grids. Very often the designed systems are complicated and poorly formalized or even non-formalized. For such cases, exact mathematical methods for solving multi-parameter, multi-criteria optimization problems are absent in modern computational mathematics. If the quality indices are non-formalized and have no strict expressions for the derivatives of the functions involved, it is useful to apply uniformly distributed sequences for testing complicated functions in procedures that search for an approximate "rational" solution of the optimization problem. Usually a "rational" solution represents some improvement of the criteria values without applying exact procedures for searching for an extremum. The good evenness property of the grids may significantly accelerate the search procedures. The paper considers and compares different procedures required for exploring the space of system parameters. The application of the classical I. M. Sobol' and R. B. Statnikov procedure, PLP-search, and LPτ-search with averaging algorithms for the optimization of dynamic stochastic systems is discussed. The latter algorithm is the most suitable for optimizing non-formalized systems that are adequately described only by simulation models. This variant of approximate optimization is called optimization-simulation. It is the most convenient for the design of complicated modern devices, and only for them can the optimization problem be formulated and solved in the case of a criterion given in the form of a continuous curve. Examples of solving optimization problems for complex technical systems are shown.
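A small sketch of using a Sobol' (LPτ) sequence to scan a parameter space and keep non-dominated "rational" points is given below with SciPy's quasi-Monte Carlo generator; the two-criteria objective and the parameter bounds are invented placeholders for a real simulation model.

```python
import numpy as np
from scipy.stats import qmc

def simulate(params):
    """Stand-in for a simulation model returning two quality indices (smaller is better)."""
    x, y = params
    return np.array([(x - 0.3) ** 2 + (y - 0.7) ** 2, abs(x - y)])

sampler = qmc.Sobol(d=2, scramble=False, seed=0)
points01 = sampler.random_base2(m=7)                     # 2**7 = 128 evenly spread trial points
points = qmc.scale(points01, [0.0, 0.0], [1.0, 1.0])     # map to the parameter bounds

criteria = np.array([simulate(p) for p in points])
# Keep the Pareto-"rational" points: no other point is at least as good in both criteria and better in one.
rational = [i for i, c in enumerate(criteria)
            if not any((criteria[j] <= c).all() and (criteria[j] < c).any() for j in range(len(points)))]
print(f"{len(rational)} rational points out of {len(points)}")
print("best by first criterion:", points[criteria[:, 0].argmin()])
```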

]]>
Jan 2015
<![CDATA[Estimating Filipino ISPs Customer Satisfaction Using Sentiment Analysis]]> Source:Computer Science and Information Technology  Volume  3  Number  1  

Frederick F. Patacsil   Alvin R. Malicdem   and Proceso L. Fernandez   

Sentiment Analysis (SA) combines Natural Language Processing (NLP) techniques and text analytics to extract useful information from textual data. This study uses SA to estimate Filipino internet customers' satisfaction with the quality of service provided by their Internet Service Providers (ISPs). Data were collected from blog comments shared on online social media. Automatic word seed selection was applied using the word pair {"Good", "Slow"} as the initial seed for the word dictionary. The Naïve Bayes method was used as a classifier to identify the dominant words used to express customers' sentiments and to determine the sentiment polarity of their opinions. The proposed automatic classifier successfully identifies the positive and negative polarity of the blog sentences with 91.50% accuracy on the training set. However, the evaluation on the manually labelled test set shows that accuracy drops to 60.27%. Some of the reasons for this drop in accuracy are investigated in this paper.
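The classification step can be sketched with a standard bag-of-words Naïve Bayes pipeline as below; the tiny training comments and labels are invented stand-ins for the study's blog data, and the automatic seed-word expansion step is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; the real data were blog comments about Filipino ISPs.
comments = [
    "good connection and fast customer service",
    "very good speed even during peak hours",
    "slow internet and frequent disconnections",
    "really slow speed, terrible support",
]
labels = ["positive", "positive", "negative", "negative"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(comments, labels)

tests = ["the service is good and reliable", "connection is slow again tonight"]
for text, polarity in zip(tests, classifier.predict(tests)):
    print(f"{polarity:8s} <- {text}")
```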

]]>
Jan 2015
<![CDATA[Communications between Deaf and Hearing Children Using Statistical Machine Translation]]> Source:Computer Science and Information Technology  Volume  3  Number  1  

Mahdiyeh Alimohammadi   and Moteza Zahedi   

Communication with the hearing society is an important problem for deaf people, who have not learned the rules of the spoken language that hearing people use. We therefore prepare an efficient corpus and apply it to the Moses machine translation system to simplify these communications. We focus on communication between children because they use e-communication more than adults. All systems that automatically process sign language corpora rely on appropriate data, so our corpus, with a limited set of words and a specific subject, is the first Persian corpus containing Persian language (PL) and Persian sign language (PSL) based on the domain of children's conversations. In the first step, the raw data are pre-processed to provide the information necessary for translation; these data are statistical information extracted from sentences. After obtaining the important data from the initial sentences, the corpus is used for training the Moses machine translation system. In addition to the main goal of this system, it can be used to teach deaf people valid Persian grammar, which is a problem for deaf people in school and society. In this paper, we compare our results with results obtained with the Moses decoder for other spoken languages, which indicates that our approach is applicable in the real world.

]]>
Jan 2015
<![CDATA[Recursive Construction of n-gonal Codes on the Basis of Block Design]]> Source:Computer Science and Information Technology  Volume  2  Number  8  

Tkachenco V.G.   and Sinyavsky O.V.   

The article defines nonlinear n-gonal block codes and considers methods for constructing them. Efficient, universal recursive methods for constructing long codes on the basis of block designs are proposed for error correction. Using these methods, error-correcting codes can be constructed for any predetermined number of errors. Among codes with a predetermined codeword length and a predetermined number of ones per word, these codes have the maximum number of codewords. Their advantages are the speed of encoding and decoding and the possibility of quickly changing the code without changing the encoding and decoding tables, which makes them suitable for use in cryptosystems.

]]>
Nov 2014
<![CDATA[Using Cloud Computing in Supporting the Management of Travel Agencies]]> Source:Computer Science and Information Technology  Volume  2  Number  8  

Wael Fouad Demerdash Mohamed   

The cloud has become a new vehicle for delivering resources such as computing and storage to customers on demand. Rather than being a new technology in itself, the cloud is a new business model wrapped around technologies such as server virtualization that take advantage of economies of scale and multi-tenancy to reduce the cost of using information technology resources. Rapid development, particularly in the field of information technology, and the rise of various types of cloud computing make it possible to exchange data in a more convenient way for clients in sectors that operate in the same field and use the same work methodologies. This paper aims at shedding light on the importance of using cloud computing to create integration between local and international tourism agencies by transforming a given data format into another on a cloud provider. The concept is applied to a number of local agencies in Egypt. A model for interaction and data exchange between the local and international agencies is built using C# with an SQL database.

]]>
Nov 2014
<![CDATA[A Study on Green IT Adoption]]> Source:Computer Science and Information Technology  Volume  2  Number  8  

Houn-Gee Chen   and Jamie Chang   

Green IT adoption is a plausible attempt for organizations to tackle the current environmental problem. The objective of this study is to examine the leading factors in Green IT adoption decisions. More specifically, we are interested in whether government support plays a key role in determining Green IT adoption in developing countries. Based on a survey of 64 organizations in Taiwan, the results indicate that environmental compliance (i.e., responding to environmental regulation changes and citizenship), rather than economic considerations, was the driving force for organizations to adopt Green IT. Furthermore, government support indeed played an important role in leading organizations to pursue their social responsibilities. Technological resources and governance toward Green IT were also important factors in organizations' readiness to exercise their social responsibilities.

]]>
Nov 2014
<![CDATA[An Acceptance Model for E-Loaning Services among University Students]]> Source:Computer Science and Information Technology  Volume  2  Number  7  

Esther Makori   

The objective of this research paper was to develop a model that would explain the adoption of electronic loaning applications among university students using the decomposed theory of planned behavior. The researchers obtained the requirements for this model from the field study that they carried out. Before the implementation of electronic loaning applications, applications were done manually using pen and paper, on forms which were later sent to the Helb headquarters in Nairobi for processing. This system faced setbacks such as loss of application forms, inefficiency in loan application processing, loan repayment evasion due to lost records, and a lack of transparency and accountability in the loaning process. This led to the adoption of e-loaning applications by Helb, the institution charged with the disbursement of loans to university students. This paper sought to establish the factors that could better explain why users of e-loaning services would prefer them over other loan application mechanisms. The target population for the study included university students who are beneficiaries of Helb loans. A simple random sampling method was used to select respondents. The collected data were then tabulated, presented in graphs, and interpreted. A hypothesized model was designed, and the researchers used the study findings to drop or adopt the hypothesized constructs. A model was afterwards developed, which the researchers named DTPBMEL (Decomposed Theory of Planned Behavior Model for Electronic Loaning).

]]>
Sep 2014
<![CDATA[Construction of Cryptosystem on the Basis of Triangular Codes]]> Source:Computer Science and Information Technology  Volume  2  Number  7  

Tkachenco V.G.   and Sinyavsky O.V.   

This article studies triangular codes and establishes the dependence of the cardinality of these codes on their codeword length. Methods for constructing triangular codes are considered, and an effective general method for constructing such codes based on monotone Boolean functions is proposed. A cryptosystem with error correction is designed on the basis of such a code, which allows errors to be corrected. The advantages of this cryptosystem are the speed of encoding and decoding and the ability to quickly change the code without changing the encoding and decoding tables. The code is changed by means of a substitution, and the choice of codes is very large: for a codeword length of 20, the number of possible codes is 20! (approximately 2.4*10^18).

]]>
Sep 2014
<![CDATA[Clouding Technologies for Training]]> Source:Computer Science and Information Technology  Volume  2  Number  7  

Tatiana Zudilova   Svetlana Odinochkina   Victor Prygun   and Konstantin Kuzmin   

This paper presents a new approach to the organization of computer training on the basis of the private training cloud prototype designed and developed by the staff of Software Development Department at ITMO University. Modern IT technologies were used to create a private training cloud prototype which made it possible to consolidate high-performance computing tools, combine different classes of storage devices and offer these resources to both educators and trainees on demand.

]]>
Sep 2014
<![CDATA[Luminance-Free Color Detection for Quantification and Automatic Segmentation in Microscopy: A Methodological Approach]]> Source:Computer Science and Information Technology  Volume  2  Number  7  

Teresa Lettini   Gabriella Serio   Tiziana Valente   Flavio Ceglie   Alessandra Punzi   Rosalia Ricco   and Vittorio Pesce Delfino   

This procedure is aimed at solving the chromatic quantification problem in histological images, as well as in other fields. On digital images, this is possible using colorimetric software applications. Our solution was to adopt a pre-processing step on the analog signal coming from a video source (cabling a suitable, dedicated unit to the video line before signal grabbing). The unit is a hardware device that processes the voltage values of the video signal image section, line by line in the raster image, exploiting the vertical interval. The output is a luminance-free video signal compatible with real-time needs (1/25 second), which is in turn compatible with the normal exploration speed of a histological slide. Some experimental results are presented.

]]>
Sep 2014
<![CDATA[Using Sticker Model to Solve the Clique Problem on DNA-Based Computing]]> Source:Computer Science and Information Technology  Volume  2  Number  6  

Sientang Tsai   Wei-Yeh Chen   and Hui-ling Huang   

This paper demonstrates how to use a sticker-based model to design a simple DNA-based algorithm for solving the clique problem. We first construct the solution space of memory complexes for the clique problem via the sticker-based model. Then, using the biological operations separate and combine, we remove those complexes that encode illegal vertices from the solution space. The computation proceeds by using an inverted electronic version of gel electrophoresis to obtain a solution of the maximum clique problem.

]]>
Jul 2014
<![CDATA[Incident Response Planning for Data Protection]]> Source:Computer Science and Information Technology  Volume  2  Number  6  

Muhammad Adeel Javaid   

The aim of this paper is to provide an advisory service to organizations in the context of facilitating the development of their CSIR capabilities. A great deal of work has been published regarding the basis of network security policies and the process of setting up CSIRs. This paper examines the implications of European privacy law – specifically the Directive on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data (95/46/EC) – for CSIRTs handling information relating to incidents. In particular it examines when and how it is appropriate for a CSIRT to use information itself, and the circumstances in which it may be appropriate to disclose it to others.

]]>
Jul 2014
<![CDATA[Issues and Implications in an Information Technology Outsourcing Relationship]]> Source:Computer Science and Information Technology  Volume  2  Number  6  

Muhammad Adeel Javaid   

IT outsourcing is an arrangement in which a company subcontracts its information technology related activities to be executed by a different company. In the past several decades, as the role of information technology in company performance grew, the fixed cost of maintaining up-and-running IT facilities and staff increased as well. Outsourcing therefore arose from companies' need to achieve superior performance of IT functions at minimum cost. The major classes of IT functions that companies outsource are infrastructure and applications. Infrastructure outsourcing refers to a company having its entire IT activities handled by a contracted vendor on its behalf. Application outsourcing refers to a company subcontracting only its core IT applications, such as ERP systems, document management systems, or business intelligence applications, to a service provider. Though the IT outsourcing process might be a useful activity for the growth and resources of a service provider's organization, it also has issues with multiple implications that need to be analyzed in detail. In this paper we examine the IT outsourcing process and analytically evaluate its effects on the future growth of an organization.

]]>
Jul 2014
<![CDATA[Comparative Study of Challenges Affecting Adoption of E-Learning for Capacity Building in Public Service Sectors of Kenya and South Africa]]> Source:Computer Science and Information Technology  Volume  2  Number  5  

Kennedy Yegon   Raymong Ongus   and Alice Njuguna   

Over the past decade there has been a rapid increase in new technological advances, specifically the use of the internet to access information. Economic growth all over the world has continued to depend on information and communication technologies (ICTs) and on countries' abilities to collect, process, and use digital information in teaching, learning, research, and development. The e-learning revolution in developed countries has proven that the use of technology can enhance growth and boost economic development. The purpose of this study was to compare challenges affecting the adoption of e-learning for capacity building in the public service sectors of Kenya and South Africa. Cluster sampling was used and the data were analysed using SPSS. The study population comprised participants who had gone through the African Leadership in ICT (ALICT) capacity-building course offered by GESCI, an e-learning course targeted at mid- to senior-level managers in the public service sectors. The sampled respondents represented Ministries of Education, Information Technology, and Planning, and public service training institutions from the two countries. The study findings identified challenges that hinder the adoption of e-learning in the public service and cut across Kenya and South Africa, including infrastructure problems, lack of funds, lack of policies favouring the use of e-learning, and the provision of reliable e-learning portals to government employees.

]]>
May 2014
<![CDATA[Skin and Motion Cues Incorporated Covariance Matrix for Fast Hand Tracking System]]> Source:Computer Science and Information Technology  Volume  2  Number  5  

Mohd Shahrimie Mohd Asaari   Shahrel Azmin Suandi   and Bakhtiar Affendi Rosdi   

Hand tracking is one of the essential elements in vision-based hand gesture recognition systems. The tracked hand image can provide meaningful gestures for more natural Human Computer Interaction (HCI) systems. In this paper, we present a fast hand tracking method based on the fusion of skin and motion features incorporated into a covariance matrix. First, the hand region is detected using a fusion of skin and motion cues, and a region of interest (ROI) is created around the detected region. During tracking, skin and motion features are extracted around the top, left, and right corners of the ROI, and hand displacement is measured using an ROI-based tracker. To increase robustness, we incorporate a covariance matrix of the ROI window as a region descriptor to represent the target object. In consecutive frames, we measure the distance descriptor covariance matrix (DDCM) between the target object and the covariance matrix extracted from the new ROI position. When the DDCM does not satisfy an acceptable threshold, the ROI position is adjusted by shifting the ROI window around the nearest neighbours to obtain a set of candidate regions. We assign the candidate region with the smallest DDCM as the correct estimated ROI position. The experimental results show that our approach can track hand gestures under several real-life scenarios with a detection rate above 95% and an average tracking speed of 42 fps.
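The DDCM in the abstract is a distance between covariance region descriptors; one common choice for such a distance (an assumption here, not necessarily the authors' exact metric) is the generalized-eigenvalue metric of Förstner and Moonen, sketched below on toy feature data.

    import numpy as np
    from scipy.linalg import eigh

    def covariance_descriptor(features):
        # features: (n_pixels, d) array of per-pixel features (e.g., colour, x, y, gradients).
        return np.cov(features, rowvar=False)

    def covariance_distance(A, B):
        # Distance between two covariance descriptors via generalized eigenvalues:
        # d(A, B) = sqrt(sum_i ln^2 lambda_i(A, B)).
        lam = eigh(A, B, eigvals_only=True)
        return np.sqrt(np.sum(np.log(lam) ** 2))

    # Toy usage: descriptors of a target patch and a candidate ROI.
    rng = np.random.default_rng(0)
    target = covariance_descriptor(rng.normal(size=(500, 5)))
    candidate = covariance_descriptor(rng.normal(size=(500, 5)) * 1.2)
    print("descriptor distance:", covariance_distance(target, candidate))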

]]>
May 2014
<![CDATA[Towards a Good Cloud Computing Provider Other Than Choosing through Data Security and Privacy Capability Factor]]> Source:Computer Science and Information Technology  Volume  2  Number  5  

Duncan Waga   and Kefa Rabah   

Cloud computing (CC) is the assumed miracle solution for establishments keen on cost-effective automation, and it is brought to users' doorsteps by cloud computing service providers. There are many areas a user should consider when selecting a vendor for a cloud services solution, from the vendor's infrastructure and computing architecture framework to the jurisdiction within which the solution resides. Even though its uptake is skyrocketing, selecting a provider who fits the bill remains a major challenge. Most practitioners falsely believe that as long as data security and privacy risks are mitigated, all is well. There are many other factors that users should confirm with the provider before any contract is signed, and this paper discusses them. Issues of intellectual property, jurisdiction, and portability of content are mentioned. Cases of disappointed users whose failed projects ended up in court are also included in the paper.

]]>
May 2014
<![CDATA[Cloud Computing Security and Privacy]]> Source:Computer Science and Information Technology  Volume  2  Number  5  

Muhammad Adeel Javaid   

The cloud computing paradigm is still evolving, but has recently gained tremendous momentum. However, security and privacy issues pose as the key roadblock to its fast adoption. In this paper we present security and privacy challenges that are exacerbated by the unique aspects of clouds and show how they're related to various delivery and deployment models. We discuss various approaches to address these challenges, existing solutions, and work needed to provide a trustworthy cloud computing environment.

]]>
May 2014
<![CDATA[Proposed Pricing Model for Cloud Computing]]> Source:Computer Science and Information Technology  Volume  2  Number  4  

Muhammad Adeel Javaid   

Cloud computing is an emerging technology of business computing and is becoming a development trend. The process of entering the cloud generally takes the form of a queue, so each user needs to wait until the current user has been served. In the system, each Cloud Computing User (CCU) requests the Cloud Computing Service Provider (CCSP) to use the resources; if the CCU finds that the server is busy, the user has to wait until the current user completes the job, which leads to a longer queue and increased waiting time. To solve this problem, it is the work of the CCSP

]]>
Apr 2014
<![CDATA[Dynamic Data Storage Publishing and Forwarding in Cloud Using Fusion Security Algorithms]]> Source:Computer Science and Information Technology  Volume  2  Number  4  

Asadi Srinivasulu   Ch.D.V.Subbarao   and A.Bhudevi   

A cloud storage system consists of a collection of storage servers that provide long-term services over the Internet. Storing data in a third party's cloud system causes serious concerns over data confidentiality. Existing systems protect data confidentiality but also limit the functionality of the system. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed. The proposed system integrates a proxy re-encryption scheme with a decentralized erasure code so that a secure storage system is constructed. It not only supports secure and robust data storage, but also lets a user forward data in the storage system to another user without retrieving it back. The proposed system fully integrates encrypting, encoding, and forwarding, and it analyzes and suggests suitable parameters for the number of copies of a message delivered to storage servers and the number of storage servers queried by a key server.

]]>
Apr 2014
<![CDATA[Sybil Attack Detection in Vehicular Networks]]> Source:Computer Science and Information Technology  Volume  2  Number  4  

Ali Akbar Pouyan   and Mahdiyeh Alimohammadi   

Vehicular communication aims to improve traffic safety by decreasing the number of accidents and to manage traffic in order to save money and time. In vehicular communication, vehicles communicate wirelessly, so the security of this network against attackers should be considered. To become a real technology that brings public safety to the roads, a vehicular ad hoc network (VANET) needs an appropriate security architecture, one that protects it from different types of security attacks and preserves privacy for drivers. One such attack against ad hoc networks is the Sybil attack, in which an attacker creates multiple identities, either identities belonging to other vehicles or dummy identities made by the attacker, and uses them from only one physical device to gain a disproportionately large influence in the network, leading to accidents or causing delays in some services for drivers. In this paper we present a case study of selected methods for Sybil attack detection in vehicular networks and discuss their advantages and disadvantages for real implementation.

]]>
Apr 2014
<![CDATA[Business Intelligence or Intelligent Business?]]> Source:Computer Science and Information Technology  Volume  2  Number  4  

Gustavo A. Ortiz   

The implementation of Business Intelligence (BI) in companies requires a high degree of commitment from top-level management and directors, a change in culture and organizational maturity, a true metanoia. It is not only a matter of efficiently incorporating a new technology platform; it is an integrated new way of doing business. Given these and other conditions, the result will be a smarter and more competitive business.

]]>
Apr 2014
<![CDATA[Enhanced Teaching and Learning in Media Richness and Media Synchronicity Environments]]> Source:Computer Science and Information Technology  Volume  2  Number  4  

Newman John H   

This paper seeks to investigate the selection of various communication media as they correspond to the subject matter and learning models. It examines the relationships between and among the properties of the media used for communication and the underlying learning model used in instruction. The intended result is to enable a more informed selection of educational environments.

]]>
Apr 2014
<![CDATA[The Impact of E-Learning on Egyptian Higher Education and its Effect on Learner’s Motivation: A Case Study]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

Samir El-Seoud  Islam Taj-Eddin  Naglaa Seddiek  Pauline Ghenghesh  and Mahmoud El-Khouly  

Web-based learning tools provide integrated environments of various technologies to support diverse educators’ and learners’ needs via the Internet. An open source Moodle e-learning platform has been implemented at universities in Egypt as an aid to deliver e-content and to provide the institution with various possibilities for implementing asynchronous e-learning web-based modules. This paper shows that the use of interactive features of e-learning increases the motivation of undergraduate students for the learning process.

]]>
Mar 2014
<![CDATA[Survey of Context Information Fusion for Sensor Networks Based Ubiquitous Systems]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

Vijay Borges   and Wilson Jeberson   

Sensor networks produce a large amount of data. Depending on the needs, these data must be processed, delivered, and accessed. These processed data, when made available together with the physical device location, user preferences, and time constraints (generically called context awareness), are widely referred to as the core function of ubiquitous systems. To the best of our knowledge, there is a lack of analysis of context information fusion for ubiquitous sensor networks. Adopting appropriate information fusion techniques can help in screening noisy measurements, controlling data in the network, and drawing the inferences needed for contextual computing. In this paper we explore different context information fusion techniques by comparing a large number of solutions and their methods, architectures, and models.

]]>
Mar 2014
<![CDATA[Analysis of a Complex Architectural Style C2 Using Modeling Language Alloy]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

Ashish Kumar Dwivedi   and Santanu Kumar Rath   

Software architecture plays an important role in the high-level design of a system in terms of components, connectors, and configuration. The main building block of software architecture is an architectural style that provides domain-specific design semantics. Although many architectural description languages (ADLs) are available in the literature as modeling notations to support architecture-based development, these ADLs lack proper tool support in terms of formal modeling and visualization. Hence formal methods are used for the modeling and verification of architectural styles. In this study, an attempt has been made to formalize one complex style, i.e., C2 (component and connector), using the formal specification language Alloy. For consistency checking of the modeling notations, the Alloy Analyzer model checker is used. It automatically checks properties such as compatibility between components and connectors, satisfiability of predicates over the architectural structure, and consistency of a style. For the modeling and verification of the C2 architectural style, a case study on a cruise control system has been considered. At the end of the study, a performance evaluation among the different SAT solvers associated with the Alloy Analyzer is performed in order to assess quality.

]]>
Mar 2014
<![CDATA[Fingerprint Mosaicking Algorithm to Improve the Performance of Fingerprint Matching System]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

Sandhya Tarar   and Ela Kumar   

Mosaicking in biometrics is a topic of great interest among researchers these days, since it is an important step towards solving the problem of data security. This paper presents the design and implementation of a mosaicking algorithm. The technique is beneficial for providing authentic access as well as security for the biometric image template. We have designed an algorithm for fingerprint mosaicking in order to achieve the required performance. The proposed algorithm provides an effective solution to security issues in biometric techniques without affecting fingerprint quality. The results and discussion section demonstrates the efficacy of the proposed algorithm.

]]>
Mar 2014
<![CDATA[Adoption MIS in Middle Level Training Institutions in Kenya]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

Timothy Serem Kiptoo   Benjamin Kyambo   and Fredrick M. Awuor   

In today’s competitive environment, the major challenge is the effective management of information. The only sure way of achieving this is by handling gathered information in an efficient and effective manner using technology. These technologies come with their own challenges that may hinder organizations from fully adopting them. This study was done to establish the factors that influence the adoption of computer-based information systems in selected technical, industrial, innovation, entrepreneurial, and training institutions in Kenya. Specifically, the study sought to establish the relationship between internal factors, external factors, and personal factors and the adoption of computer-based information systems in middle-level institutions in Kenya. The research design used in this study was a case study with a target population of 160; the census method of sampling was used, and questionnaires were used to collect data. Data were analyzed both qualitatively and quantitatively using descriptive and inferential statistics. The findings revealed that the institution has invested substantial resources in management information systems and that there is a significant relationship between external factors (22.30%), personal factors (21.20%), and internal factors (21.40%) and the adoption of computer-based management information systems in middle-level institutions in Kenya. The hypothesis results revealed a significant relationship between the dependent and independent variables at the 5% level of significance. The study concluded that institutions understand the need to adopt MIS and have even made attempts to facilitate its adoption; however, a gap remains in the usage of these systems in the management of information. The study recommends an assessment of MIS training, posting information on shared databases, and engaging the services of an IT company with the necessary technical capacity at the initial stages of management information system (MIS) adoption.

]]>
Mar 2014
<![CDATA[Keyword Based Searching According to the Movie Names]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

Sunilkumar Reddy P   Govindarajulu   and Srinivasulu Asadi   

Keyword-based queries are inherently ambiguous: given a set of keywords, the database search engine has only an uncertain guess about the user's informational need represented by the query. The potentially high complexity of the data makes providing intelligent search results effectively extremely challenging. Databases enable users to precisely express their informational needs using structured queries. However, database query construction is a laborious and error-prone process that cannot be performed well by most end users. Keyword search alleviates the usability problem at the price of query expressiveness. As keyword search algorithms do not differentiate between the possible informational needs represented by a keyword query, users may not receive adequate results. This paper presents Extended Incremental Query Processing, a novel approach to bridge the gap between the usability of keyword search and the expressiveness of database queries. Extended Incremental Query Processing enables a user to start with an arbitrary keyword query and incrementally refine it into a structured query through an interactive interface. The enabling techniques of Extended Incremental Query Processing include: 1) a probabilistic framework for incremental query construction; 2) a probabilistic model to assess the possible informational needs represented by a keyword query; 3) an algorithm to obtain the optimal query construction process. This paper presents the detailed design of Extended Incremental Query Processing and demonstrates its effectiveness and scalability through experiments over real-world data and a user study. Extracting information from semi-structured documents is a very hard task, and documents are often so large that the data set returned as the answer to a query may be too big to convey interpretable knowledge. Here, we also describe an approach based on Tree-Based Association Rules (TARs): mined rules which provide approximate, intensional information on both the structure and the contents of XML documents. This mined knowledge is later used to provide a concise idea (the gist) of both the structure and the content of the XML document, as well as quick, approximate answers to queries.

]]>
Mar 2014
<![CDATA[Target Inference on Evaluation of Angle Oriented Cluster ]]> Source:Computer Science and Information Technology  Volume  2  Number  3  

R.N.V.Jagan Mohan   and K.Raja Sekhara Rao   

In general, data in any field contain unnecessary records, and although several algorithms exist to remove unwanted data, none fully solves the problem; researchers are still working to complete this task. For instance, face recognition systems have suffered from the pose verification problem over the last few decades. To solve this problem we use an angle orientation technique, which compares input images of the same person taken from various angles and directions with the database image. Removing needless data, i.e., unsupervised images, is the best way to recognize a target inference, and with this idea we attempt a small-scale approach for this kind of application. In this paper, we introduce a ternary cluster relation on angle-oriented images, in which images at various angles form three nested clusters in clockwise and/or anti-clockwise directions. We use a multivariate analysis technique to improve cluster quality through cluster evaluation, together with statistical approaches, an outlier detection methodology, and a bootstrapping technique to find the target inference. Experimental results on angle-oriented cluster images show increased performance using an analysis of variance test.

]]>
Mar 2014
<![CDATA[Three Strategies Tabu Search for Vehicle Routing Problem with Time Windows]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

Abdel-Rahman Hedar   and Mohammed Abdallah Bakr   

In the vehicle routing problem with time windows (VRPTW), the objective is to minimize the number of vehicles and then minimize the total time travelled. Each route starts at the depot and ends at a customer, visiting a number of customers, each exactly once, en route, without returning to the depot. The demand of each customer must be completely fulfilled by a single vehicle, and the total demand serviced by each vehicle must not exceed the vehicle capacity. An effective three-strategy tabu search heuristic for the VRPTW (TSTS-VRPTW) is proposed, based on three neighborhood functions: MOVE, EXCHANGE, and SWAP. Computational results on Solomon's benchmarks, which consist of six different datasets, show that the proposed TSTS-VRPTW is comparable in terms of solution quality to the best-performing published heuristics.
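A minimal sketch of the tabu-search skeleton described in the abstract; the routes, distance matrix, and the single SWAP-style neighborhood used here are illustrative assumptions, not the authors' exact operators, and the time-window and capacity feasibility checks are omitted.

    import random

    def total_cost(routes, dist):
        # Sum of travel times; each route starts at the depot (node 0).
        cost = 0.0
        for route in routes:
            prev = 0
            for c in route:
                cost += dist[prev][c]
                prev = c
        return cost

    def tabu_search(routes, dist, iters=200, tenure=7, candidates=20):
        current = [r[:] for r in routes]
        best, best_cost = [r[:] for r in current], total_cost(current, dist)
        tabu = {}  # swapped customer pair -> iteration until which it is forbidden
        for it in range(iters):
            chosen, chosen_cost, chosen_move = None, float("inf"), None
            for _ in range(candidates):
                i, j = random.sample(range(len(current)), 2)
                a, b = random.randrange(len(current[i])), random.randrange(len(current[j]))
                move = tuple(sorted((current[i][a], current[j][b])))
                neighbor = [r[:] for r in current]
                neighbor[i][a], neighbor[j][b] = neighbor[j][b], neighbor[i][a]
                cost = total_cost(neighbor, dist)
                # Keep the best candidate that is not tabu (or beats the best so far: aspiration).
                if (tabu.get(move, -1) < it or cost < best_cost) and cost < chosen_cost:
                    chosen, chosen_cost, chosen_move = neighbor, cost, move
            if chosen is None:
                continue
            current, tabu[chosen_move] = chosen, it + tenure
            if chosen_cost < best_cost:
                best, best_cost = [r[:] for r in chosen], chosen_cost
        return best, best_cost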

]]>
Feb 2014
<![CDATA[A Collaboration Facilitator Model for Learning Virtual Environments]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

Adriana Peña Pérez Negrón   

In virtual environments, most pedagogical virtual tutors or facilitators supervise or guide the learning activity; they are task-oriented. In contrast, the facilitator proposed here is strictly about monitoring some aspects of collaboration and offering advice in this regard. In a multiuser virtual environment, that is, a Collaborative Virtual Environment, oral communication is chosen over written communication in order to enhance the user's feelings of presence, co-presence, and immersion; but oral communication analysis presents a high resource overhead. As an alternative, the monitoring activity of this facilitator is based on two nonverbal cues of interaction: talking-turn patterns and object manipulation. An empirical study to validate this approach was conducted, based on the participants' perception of the suitability of the facilitator's messages; the results showed that the students accepted a significant amount of the generated advice.

]]>
Feb 2014
<![CDATA[Real-Time 3D Reconstruction Using a Kinect Sensor]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

Claudia Raluca Popescu   and Adrian Lungu   

Nowadays, in the robotics industry, point cloud processing has become more and more widespread since the launch of the Microsoft Kinect device in 2011. This paper aims to compare methods for 3D reconstruction using a Kinect sensor and to rebuild a detailed model of an indoor scene in order to use it in a CAD application. The acquisition system allows the user to rotate around the target area and see a continuously updated 3D model of the desired object. To create the final model, different viewpoints must be acquired and fused into a single representation.
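Fusing viewpoints requires estimating the rigid transform between two point clouds; the sketch below shows the Kabsch/SVD alignment step for corresponding points (a generic illustration under assumed correspondences, not the paper's specific registration pipeline).

    import numpy as np

    def rigid_align(source, target):
        # Kabsch algorithm: best-fit rotation R and translation t so that R @ source + t ~= target.
        # source, target: (n, 3) arrays of corresponding 3D points from two viewpoints.
        src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = tgt_c - R @ src_c
        return R, t

    # Toy usage: a cloud and a rotated, translated copy of it.
    rng = np.random.default_rng(1)
    cloud = rng.normal(size=(100, 3))
    theta = np.pi / 6
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    moved = cloud @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = rigid_align(cloud, moved)
    print(np.allclose(cloud @ R.T + t, moved, atol=1e-6))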

]]>
Feb 2014
<![CDATA[Apply Adaptive Threshold Operation and Conditional Connected-component to Image Text Recognition]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

Chuen-Min Huang   Yu-Kai Lin   and Rih-Wei Chang   

How to effectively extract text from an image is a critical issue in the text recognition domain. The variety of background components in an image, for example different colors, textures, or brightness levels, aggravates the text recognition problem. In this research, we applied an "adaptive threshold operation" and "conditional connected components" to deal with non-uniform lightness and complicated background images. Unlike the general procedure of using the whole image to separate the background from the objects, our research adopts a divide-and-merge strategy to tackle this problem. Instead of segregating the grayscale image into many regions, our approach partitions an image into three equal-sized horizontal segments in order to identify the local threshold value of each segment efficiently. With this approach, we successfully identified and recognized texts in an image. The results show that the rates of object identification and recognition reach 81.17% and 91.30%, respectively.
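A minimal sketch of the per-segment local thresholding idea, using Otsu's method for each of the three horizontal segments; Otsu and the image file name are assumptions for the example, and the paper's exact threshold rule and connected-component conditions are not reproduced.

    import cv2
    import numpy as np

    def segment_threshold(gray):
        # Split the grayscale image into three equal-sized horizontal segments
        # and binarize each one with its own locally computed threshold.
        h = gray.shape[0]
        parts = []
        for k in range(3):
            seg = gray[k * h // 3:(k + 1) * h // 3, :]
            _, binary = cv2.threshold(seg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            parts.append(binary)
        merged = np.vstack(parts)
        # Connected components on the merged binary image (candidate text regions).
        n_labels, labels = cv2.connectedComponents(merged)
        return merged, n_labels

    # Usage (assumes a grayscale image file "sign.png" exists):
    # gray = cv2.imread("sign.png", cv2.IMREAD_GRAYSCALE)
    # binary, n = segment_threshold(gray)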

]]>
Feb 2014
<![CDATA[Cost Effective Multimedia E-Learning Application for Nigerian Higher Institutions]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

Falaye Adeyinka Adesuyi   Adama Ndako Victor   Osho Oluwafemi   Ugwuoke Cosmas Uchenna   and Ogunlana Olushola Gabriel   

The necessity of good, reliable, modern, and cheap communication and information transfer within an institution cannot be overemphasized. However, the high cost of internet subscriptions and telephone technology makes this prospect difficult to actualize. In another vein, the limited space of conducive learning environments relative to the number of students has made effective teaching and learning nearly impossible. In this research work, we designed and implemented an intranet-based communication and e-learning system, as a unified system, to offer seamless institution-wide communication at low cost, as well as remote learning, by using the local intranet networks already present in most Nigerian tertiary institutions. The system provides high-quality VOIP calls, video conferencing, network TV, e-classrooms, file sharing, cheap customized SMS, audio/video/file messaging, a search FM radio utility, news, entertainment, and more. Some intranet-related issues bordering on accessibility and duplicates were also addressed. The system was developed with Microsoft Visual Basic 6.0 and ASP.NET Visual Studio 2010.

]]>
Feb 2014
<![CDATA[Resource Demand Prediction and Carbon Emission Estimation for Data Centers]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

San Hlaing Myint   and Thandar Thein   

The energy consumption of data centers has become a key issue in today's ICT sector and a significant factor for a green environment. A substantial reduction in energy consumption can be made by powering down servers when they are not in use. In a cloud data center, it is very hard to manage and allocate resources to incoming dynamic workload demands. Predicting the required resource demand can reduce the data center's resource waste and achieve maximum profit with minimum risk. Without proper prediction, a data center may overprovision or underprovision, which can cause resource waste and significant financial penalties, so an efficient resource management scheme is needed to reduce energy consumption and carbon dioxide (CO2) emission. The aim of the present study is to develop a model for predicting future resource demand and estimating CO2 emission by comparatively assessing the suitability of several machine learning techniques. In order to reduce processing overheads, feature selection is conducted in the prediction model. To estimate the CO2 emission, a power model and a carbon model are also developed. Experiments are conducted on real-world workload traces, and the results show that the prediction model can predict future resource demand with acceptable accuracy.
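A minimal sketch of the kind of pipeline described: regression-based demand prediction followed by a simple power and carbon conversion. The synthetic workload trace, the linear power model, and the grid carbon-intensity figure are assumptions for the example, not the paper's fitted models.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical hourly CPU-demand trace (fraction of capacity) with a daily pattern.
    hours = np.arange(24 * 14)
    demand = 0.5 + 0.3 * np.sin(2 * np.pi * hours / 24) \
             + 0.05 * np.random.default_rng(0).normal(size=hours.size)

    # Predict the next hour from the previous 24 hours (simple lag features).
    X = np.array([demand[i:i + 24] for i in range(len(demand) - 24)])
    y = demand[24:]
    model = LinearRegression().fit(X[:-24], y[:-24])
    predicted = model.predict(X[-24:])                  # next-day hourly utilisation

    # Power model (assumed): idle 100 W, peak 250 W per server, linear in utilisation.
    servers = 100
    power_w = servers * (100 + (250 - 100) * predicted)
    energy_kwh = power_w.sum() / 1000.0                 # one hour per sample

    # Carbon model (assumed grid intensity): 0.5 kg CO2 per kWh.
    print("predicted CO2 for the day: %.1f kg" % (energy_kwh * 0.5))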

]]>
Feb 2014
<![CDATA[Enhancement of Conference Organization Using Ontology Based Information Correlation]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

Hassan Noureddine   Iman Jarkass   Maria Sokhn   Omar Abou Khaled   and Elena Mugellini   

The world today has witnessed an important transition to the virtual world via the web. After years of interaction and exchange between people, the web has become saturated with an enormous quantity of data in various fields. Therefore, (semi-)automatic applications have become a necessity for finding appropriate information in a short time. In this context, we propose a new approach to ontology-based information correlation from various web resources. To validate this approach, we introduce a conference organizer system that is useful when setting up a conference. We benefit from semantic web technologies to extract, correlate, rank, and store information, and consequently propose ranked lists of experts and social events according to the user's requests.

]]>
Feb 2014
<![CDATA[A Simulation Study of Managed Qualitative Thinking Support Systems (MQTSS)]]> Source:Computer Science and Information Technology  Volume  2  Number  2  

John H. Newman   

The continued growth in the volume of available domain and technical data has been facilitated by a corresponding advancement in information and communication technology. This "information overload" can result in inefficient use of time and resources, as well as the creation of recommended courses of action that are overruled by the decision makers' judgment and experience. In order to address these problems, multiple knowledge sources and an inference process capable of mirroring human thought processes (especially judgment and experience) must be available at the right time to the persons or groups needing the knowledge for decision making. Such a concept can be referred to as MANAGED QUALITATIVE THINKING SUPPORT SYSTEMS (MQTSS). Traditional decision support systems (DSS) rely upon decision maker or staff expertise to render knowledge in support of decision making. If the decision maker or staff has insufficient domain or technical expertise to utilize the DSS's embedded models, interpret results, or implement the recommendations, knowledge delivery may be compromised or rendered ineffective. MQTSS can alleviate these support problems and improve knowledge delivery for decision making by reducing knowledge search times, streamlining decision-making tasks, reducing decision time, and promoting appropriate qualitative thinking. The MQTSS approach can theoretically enhance the decision-making process and decision outcomes. This paper attempts to replicate an earlier study by this author to test the theory. First, the MQTSS approach is presented. Next, an information system is created to deliver the technology to management. Finally, a simulation experiment is reported that compares the effectiveness of support rendered by a traditional decision support system and the created MQTSS information system. The paper closes with conclusions and implications for information systems research.

]]>
Feb 2014
<![CDATA[The Reliability of Circuits in the Basis Anticonjunction with Constant Faults of Gates]]> Source:Computer Science and Information Technology  Volume  2  Number  1  

M. A. Alekhina   and O.Yu. Barsukova   

We consider the realization of Boolean functions by asymptotically optimal reliable circuits with constant faults at the outputs of the gates in the basis {x|y}, where x|y denotes anticonjunction, i.e., the negation of the conjunction of x and y. It is proved that almost all Boolean functions can be realized by asymptotically optimal reliable circuits that operate with unreliability asymptotically equal to 2ε01 as ε0, ε1 → 0, where ε0 is the probability of a type-0 fault at the output of a basis gate and ε1 is the probability of a type-1 fault at the output of a basis gate.

]]>
Jan 2014
<![CDATA[Critical Factors of XBRL Adoption in Nigeria: a Case for Semantic Model-Based Digital Financial Reporting]]> Source:Computer Science and Information Technology  Volume  2  Number  1  

Mathias Gboyega OGUNDEJI   Ebenezer OLUWAKAYODE   and Oladipupo Muhrtala TIJANI   

The application of the Technology Acceptance Model 2 (TAM 2) has reached remarkable heights in theory and practice. However, its application to recent developments in corporate reporting has been limited, especially in emerging economies. To address this gap, we extend the theory to eXtensible Business Reporting Language (XBRL) adoption in Nigeria from the viewpoint of external auditors in the Big Four accountancy firms. XBRL usage, its perceived usefulness, and perceived ease of use were tested together with subjective norm, image, job relevance, results demonstrability, and output quality. Through path analysis among these TAM 2 variables, the results indicated that path magnitudes were significantly altered by XBRL model acceptance. The views of respondents reflect a positive effect of perceived usefulness on the intention to adopt XBRL, and there is a strong effect of perceived ease of use on perceived usefulness. The data analysis also indicates a positive effect of results demonstrability on perceived usefulness, while output quality and job relevance likewise influence perceived usefulness. This suggests that if the use of XBRL delivers improved data accuracy and information transparency, users can be expected to perceive the technology as valuable. The study is the first of its kind in Nigeria to observe the determinants of XBRL acceptance with regard to professional accountants in the Big Four audit firms.

]]>
Jan 2014
<![CDATA[Application of M-Gov to Provision of Education for all in Developing Nations]]> Source:Computer Science and Information Technology  Volume  2  Number  1  

Benard Maake   and Fredrick Awour Mzee   

Due to advances in wireless and mobile technology, many applications have been developed in areas such as health, entertainment, education, and agriculture, among others, to harness the benefits of this technology and to provide services closer to the users. In addition, this technology enables the government and private sectors to deliver, manage, organize, and disseminate services to the public in a more efficient and economical manner. Providing quality, free-for-all education that is readily accessible is critical in developing countries, where most citizens live on less than 1 USD a day. In these countries, most learners trade off time for education against casual work to earn a living; in such cases, the learners are denied the right of access to quality, free education. It is therefore important to illustrate how education managers could adopt mobile and wireless technology to facilitate, control, and manage capacity development towards achieving education for all. In this paper, we explore the benefits of mobile governance (m-Gov) of educational resources as a tool to deliver free, quality, and accessible education. The paper argues that such an ICT application needs to incorporate all education stakeholders to attain accountability and transparency in the provision of education services. The paper also illustrates the key concepts in implementing the argued ICT application, i.e., m-Gov.

]]>
Jan 2014
<![CDATA[Support Vector Machine and Least Square Support Vector Machine Stock Forecasting Models]]> Source:Computer Science and Information Technology  Volume  2  Number  1  

Lucas Lai   and James Liu   

This paper explores Support Vector Machine and Least Square Support Vector Machine models for stock forecasting. Three prevailing forecasting techniques - Generalized Autoregressive Conditional Heteroskedasticity (GARCH), Support Vector Regression (SVR), and Least Square Support Vector Machine (LSSVM) - are combined with the wavelet kernel to form three novel algorithms, wavelet-based GARCH (WL_GARCH), wavelet-based SVR (WL_SVR), and wavelet-based Least Square Support Vector Machine (WL_LSSVM), to address the non-linear and non-parametric financial time series problem. This paper presents a platform for comparing the wavelet-based algorithms using the Hang Seng Index, the Dow Jones, and the Shanghai Composite Index, which have significant influence on each other. It was discovered that the wavelet-based models are not as good as the LS-SVM model; the best result is from LS-SVM without the wavelet-based kernel.
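A minimal sketch of fitting an SVR with a wavelet kernel; the Morlet-style mother wavelet h(u) = cos(1.75u)·exp(-u²/2) and the dilation parameter are common choices in the wavelet-kernel literature and are assumptions here, as is the toy price series.

    import numpy as np
    from sklearn.svm import SVR

    def wavelet_kernel(X, Y, a=1.0):
        # K(x, y) = prod_i h((x_i - y_i) / a), with mother wavelet h(u) = cos(1.75u) * exp(-u^2 / 2).
        diff = (X[:, None, :] - Y[None, :, :]) / a
        h = np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2.0)
        return h.prod(axis=2)

    # Toy "index" series: predict the next value from the previous 5 (lag features).
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=300)) + 100.0
    X = np.array([series[i:i + 5] for i in range(len(series) - 5)])
    y = series[5:]

    model = SVR(kernel=wavelet_kernel, C=10.0, epsilon=0.1)
    model.fit(X[:-20], y[:-20])
    print("last 3 predictions:", model.predict(X[-3:]))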

]]>
Jan 2014
<![CDATA[Knowledge Extraction in Fuzzy Relational Systems Based on Genetic and Neural Approach]]> Source:Computer Science and Information Technology  Volume  2  Number  1  

Alexander Rotshtein   and Hanna Rakytyanska   

In this paper, a problem of MIMO object identification, expressed mathematically in terms of fuzzy relational equations, is considered. We use a multivariable relational structure based on modular fuzzy relational equations with a multilevel composition law. The identification problem consists of extracting an unknown relational matrix and the parameters of the membership functions included in the fuzzy knowledge base, which can be translated into a set of fuzzy IF-THEN rules. In fuzzy relational calculus this type of problem is an inverse problem and requires solving the composite fuzzy relational equations. The search for a solution amounts to solving an optimization problem using a hybrid genetic and neural approach. The genetic algorithm uses all the available experimental information for the optimization, i.e., it operates off-line. The essence of the approach is in constructing and training a special neuro-fuzzy network, which allows on-line correction of the extracted relations when new experimental data are obtained. The resulting solution is linguistically interpreted as a set of possible rule bases. The proposed approach is illustrated by a computer experiment and an example from medical diagnosis.
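For readers unfamiliar with fuzzy relational equations, the sketch below shows the standard max-min composition B = A ∘ R that underlies such systems; the membership values are made up, and the paper's multilevel modular composition and inversion procedure are not reproduced.

    import numpy as np

    def max_min_compose(A, R):
        # B[j] = max_i min(A[i], R[i, j]): the max-min composition of a fuzzy
        # input vector A with a fuzzy relational matrix R.
        return np.max(np.minimum(A[:, None], R), axis=0)

    # Hypothetical example: 3 input terms, 2 output terms.
    A = np.array([0.2, 0.8, 0.5])             # membership degrees of the inputs
    R = np.array([[0.1, 0.9],
                  [0.7, 0.4],
                  [0.6, 0.6]])                # fuzzy relation "input term -> output term"
    print(max_min_compose(A, R))              # inferred output membership degrees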

]]>
Jan 2014
<![CDATA[An Assessment Model for the State of Organizational Readiness Inservice Oriented architecture Implementation Based on Fuzzy Logic]]> Source:Computer Science and Information Technology  Volume  2  Number  1  

Akram Hedayati   Babak Shirazi   and Hamed Fazlollahtabar   

Growing change in organizations, together with the strong need to utilize information and communication technology, implies applying changes in the organization in accordance with changes in functionality. What is more, the information technology structure, hardware, software, and information infrastructure have to be synchronized. One of the most important approaches leading to this goal is service oriented architecture. Service oriented architecture (SOA) is a style of information systems architecture supporting loose coupling of services for flexibility and interoperability of systems, and it is independent of the technology. The most important achievement of adopting service oriented architecture is increased flexibility, speed, and agility; consequently, it changes the structure of the organization. Given the broad scope of enterprise architecture projects and their major cost, it is necessary for an organization to be aware of its readiness before implementation so that it can make the right decisions, develop appropriate strategies, and adopt the new approach. In this paper, we propose a model for evaluating organizational readiness to implement SOA using Mamdani fuzzy logic. According to the results of this assessment method, we can decide more precisely whether or not an organization is ready to adopt SOA. By using this assessment model, organizations can recognize their strengths and weaknesses, identify their areas for improvement, and thus increase their readiness. The model is tested by means of a case study in the Tehran municipality ICT organization.
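A compact illustration of Mamdani-style inference of a readiness score from two hypothetical criteria; the membership functions, rules, and scales are assumptions for the example, not the paper's assessment model.

    import numpy as np

    def low(x, a, b):
        # Shoulder membership: 1 below a, falling linearly to 0 at b.
        return np.clip((b - x) / (b - a), 0.0, 1.0)

    def high(x, a, b):
        # Shoulder membership: 0 below a, rising linearly to 1 at b.
        return np.clip((x - a) / (b - a), 0.0, 1.0)

    def readiness(it_maturity, mgmt_support):
        # Inputs on a 0-10 scale; output "readiness" defuzzified on a 0-100 universe.
        u = np.linspace(0.0, 100.0, 501)
        # Rule 1: IF IT maturity is high AND management support is high THEN readiness is high.
        r1 = min(high(it_maturity, 4, 8), high(mgmt_support, 4, 8))
        # Rule 2: IF IT maturity is low OR management support is low THEN readiness is low.
        r2 = max(low(it_maturity, 2, 6), low(mgmt_support, 2, 6))
        # Mamdani: clip each consequent (min), aggregate (max), defuzzify by centroid.
        agg = np.maximum(np.minimum(r1, high(u, 40, 80)), np.minimum(r2, low(u, 20, 60)))
        return float((u * agg).sum() / agg.sum())

    print("readiness score: %.1f" % readiness(it_maturity=7.5, mgmt_support=6.0))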

]]>
Jan 2014
<![CDATA[Park-A-Lot: An Automated Parking Management System]]> Source:Computer Science and Information Technology  Volume  1  Number  4  

Kuo-pao Yang   Ghassan Alkadi   Bishwas Gautam   Arjun Sharma   Darshan Amatya   Sylvia Charchut   and Matthew Jones   

This paper describes the architecture and design of Park-A-Lot, an automated parking management system. It explains the working dynamics of this prototype system and its communication with a website used to find a parking spot before arriving at the destination. The system proposes a solution to urban parking problems: it minimizes the hassle of existing issues and provides an implementable model for parking lots in an urban setting.

]]>
Dec 2013
<![CDATA[Traffic Incident Detection Based on the Grid Model]]> Source:Computer Science and Information Technology  Volume  1  Number  4  

Wei-Lieh Hsu   Po-Lun Chang   and Rueiher Tsaur   

Highway accidents significantly impact normal traffic flow. Consequently, automatic detection of abnormal traffic events has gradually attracted the attention of researchers interested in intelligent transportation systems. This work presents a vision-based approach for automatic traffic congestion and incident detection. The proposed approach involves extracting entropy-based features to create a grid model that simulates dynamic traffic flow behavior. When an unusual event occurs in the lane of the vehicle employing the system, the system can immediately detect it and issue signals to approaching vehicles to prevent accidents. Experiments conducted using various simulations clearly demonstrate the validity and effectiveness of the proposed approach for managing traffic congestion and detecting incidents.

]]>
Dec 2013
<![CDATA[More on Intuitionistic Neutrosophic Soft Sets]]> Source:Computer Science and Information Technology  Volume  1  Number  4  

Said Broumi   and Florentin Smarandache   

Intuitionistic neutrosophic soft set theory, proposed by S. Broumi and F. Smarandache [28], has been regarded as an effective mathematical tool for dealing with uncertainties. In this paper, new operations on intuitionistic neutrosophic soft sets are introduced. Some results relating to the properties of these operations are established. Moreover, we illustrate their interconnections with each other.

]]>
Dec 2013
<![CDATA[Hindrance of ICT Adoption to Library Services in Higher Institution of Learning in Developing Countries]]> Source:Computer Science and Information Technology  Volume  1  Number  4  

Fredrick Mzee Awuor   Kefah Rabah   and Benard Magara Maake   

The adoption of ICT has revolutionized service provision in libraries and their general information management systems. This has transformed most services to digital: e-databases (e-resources), e-catalogs, e-libraries, and the use of archiving technology such as DSpace. Today, within the developing world, most libraries are moving towards transforming their existing traditional library services into digital systems, allowing them to tap into and benefit from the vast advantages of ICT, for example, reduced operating costs, increased efficiency, and on-the-fly availability of information. Even with such numerous benefits, most Higher Institutions of Learning (HILs) in developing countries still lag behind in the adoption of ICT in their library services. This paper seeks to investigate the challenges that hinder the adoption of ICT in libraries, with special attention to HILs in developing countries. Further, solutions and recommendations to address these challenges are presented with a case study analysis.

]]>
Dec 2013
<![CDATA[Constructing DNA-based Parallel Adder with Basic Logic Operations in the Adleman-Liption Model]]> Source:Computer Science and Information Technology  Volume  1  Number  4  

Sientang Tsai   

It was first shown by Adleman that deoxyribonucleic acid (DNA) strands could be employed to compute a solution to an instance of the NP-complete Hamiltonian Path Problem (HPP). Lipton also demonstrated that Adleman's techniques could be used to solve the satisfiability (SAT) problem. In this paper, it is demonstrated how the DNA operations presented by Adleman and Lipton can be used to construct a bio-molecular parallel adder with basic logic operations in the Adleman-Lipton model.

]]>
Dec 2013
<![CDATA[Information Technology Effects on Accident Decreasing with Critical Management of Gas Life Line in Electronic City]]> Source:Computer Science and Information Technology  Volume  1  Number  4  

Mohammadreza Sadeghi Moghaddam   Maryam Nakhostin Ahmadi   and Salameh Azimi   

Computer communications developed with the industrial revolution, and then, as network communication improved into the Internet, a new space was created for the city, called the virtual city or electronic city. Accidental events are a kind of crisis, and managing them as soon as possible is necessary. Since an electronic city helps the manager obtain the necessary information in critical periods, it plays a significant role in crisis management. This study aimed at developing the electronic city and gas lifelines and managing them with electronic technology. Lifelines are fundamental infrastructure, commonly linear networks, that deliver the necessities of people's lives; if they are damaged, several different accidents and crises can occur. In this article, gas lifelines are examined practically under Iranian conditions, and several different designs against crises are presented using information technology.

]]>
Dec 2013
<![CDATA[All-Pairs Shortest Paths Algorithm for High-dimensional Sparse Graphs]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Urakov A. R.   and Timeryaev T. V.   

The all-pairs shortest path problem on weighted undirected sparse graphs is considered here. For this problem, we propose a "disassembly and assembly of a graph" algorithm, which uses a solution of the problem on a small-dimensional graph to obtain the solution for the given graph. The proposed algorithm has been compared to one of the fastest classic algorithms on data from an open public source.
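The paper's disassembly and assembly procedure is not detailed in the abstract; as a point of reference, the sketch below shows the classic sparse-graph baseline it would be compared against, all-pairs shortest paths by running Dijkstra from every vertex. The toy graph is an assumption for the example.

    import heapq

    def dijkstra(adj, src):
        # adj: {u: [(v, w), ...]} adjacency list of an undirected weighted graph.
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def all_pairs(adj):
        # Repeated Dijkstra: roughly O(V * (E + V) log V), a standard choice for sparse graphs.
        return {u: dijkstra(adj, u) for u in adj}

    # Toy undirected graph.
    adj = {1: [(2, 4.0), (3, 1.0)], 2: [(1, 4.0), (3, 2.0)], 3: [(1, 1.0), (2, 2.0)]}
    print(all_pairs(adj)[1])    # shortest distances from vertex 1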

]]>
Nov 2013
<![CDATA[A Graph-based Overview Visualization for Data Landscapes]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Grigor Tshagharyan   and Hans-Jorg Schulz   

In many domains, it is becoming more and more common for an analysis to span various interlinked data sources, which we collectively term a data landscape. Yet for the selection of appropriate data sources from the wide range of available ones, current approaches and systems rarely offer more support than a File-Open dialog. This paper presents a visualization approach that aims to give a stable and meaningful overview of a data landscape in order to ease finding and selecting data sources that may be useful in a subsequent visual analysis. As such, it serves as a visual starting point from which to bootstrap a visual analysis by finding and selecting the data sources relevant to the question at hand. The approach is exemplified by applying it to a current snapshot of the data landscape from the CKAN-LOD data hub, consisting of 216 data sources with 613 links between them.

]]>
Nov 2013
<![CDATA[Using Simulated Annealing and Ant-Colony Optimization Algorithms to Solve the Scheduling Problem]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Nader Chmait   and Khalil Challita   

The scheduling problem is one of the most challenging problems faced in many different areas of everyday life. It can be formulated as a combinatorial optimization problem and has been solved with various methods using meta-heuristics and intelligent algorithms. We present in this paper a solution to the scheduling problem using two different heuristics, namely Simulated Annealing and Ant Colony Optimization. A study comparing the performance of both solutions is described, and the results are analyzed.
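A compact sketch of one of the two heuristics, Simulated Annealing, applied to a toy single-machine scheduling instance. The cost function, neighbourhood move, and cooling schedule are illustrative assumptions rather than the configuration studied in the paper.

# Simulated Annealing for a toy scheduling problem: minimise total completion time.
import math
import random

def total_completion_time(order, durations):
    t, cost = 0, 0
    for job in order:
        t += durations[job]
        cost += t
    return cost

def simulated_annealing(durations, temp=100.0, cooling=0.95, steps=2000):
    current = list(range(len(durations)))
    random.shuffle(current)
    best = current[:]
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        neighbour = current[:]
        neighbour[i], neighbour[j] = neighbour[j], neighbour[i]   # swap two jobs
        delta = (total_completion_time(neighbour, durations)
                 - total_completion_time(current, durations))
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
            if (total_completion_time(current, durations)
                    < total_completion_time(best, durations)):
                best = current[:]
        temp *= cooling
    return best

print(simulated_annealing([7, 2, 9, 4, 1]))   # shorter jobs tend to be scheduled first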

]]>
Nov 2013
<![CDATA[A Review and Evaluation of Human Interactive Proof (HIP) Technique for Combating Malicious Automated Scripts]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Onwudebelu Ugochukwu   Uchenna C. Ugwoke   and Ifeanyi-Reuben Nkechi Jacinta   

Advances in the field of Information Technology (IT) make information security an inseparable part of it. In order to deal with security, authentication plays an important role. Computer scientists have developed Human Interactive Proofs (HIPs), commonly known as CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), as challenge-response tests used in computing to confirm that the identity of an individual requesting a service is that of a legitimate human user rather than a malicious automated script. A CAPTCHA is a security measure which uses computer programs that automatically generate and grade puzzles that most people can solve without difficulty, but that current programs cannot. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user, and not by anyone else. This paper presents a brief overview of the literature in the field of CAPTCHA authentication techniques in the online environment. Furthermore, it evaluates HIPs with the objective of providing insights into their lack of acceptance, as well as some suggestions for further research in this field.

]]>
Nov 2013
<![CDATA[Face and Hand Shape Segmentation Using Statistical Skin Detection for Sign Language Recognition]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Bahare Jalilian   and Abdolah Chalechale   

Accurate face and hand segmentation is the first and most important step in sign language recognition systems. In this paper, we propose a method for face and hand segmentation that helps to build a better vision-based sign language recognition system. The proposed method is based on the YCbCr color space, a single Gaussian model, Bayes' rule, and morphological operations. It detects face and hand regions in complex backgrounds and under non-uniform illumination. The method was tested on 700 posture images of the sign language that are performed with one hand or both hands. Experimental results show that our method achieves good performance for images with complex backgrounds.
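A minimal sketch of the general pipeline described above, assuming OpenCV and NumPy: a single-Gaussian skin-colour model in the CbCr plane, a Bayes-style threshold on the Mahalanobis distance, and morphological clean-up. The mean, covariance, and threshold values are illustrative assumptions, not the parameters estimated in the paper.

# Gaussian skin-colour detection in YCbCr with morphological clean-up (sketch).
import numpy as np
import cv2

MU = np.array([120.0, 155.0])                  # assumed (Cb, Cr) skin mean
COV = np.array([[80.0, 20.0], [20.0, 60.0]])   # assumed covariance
COV_INV = np.linalg.inv(COV)
THRESHOLD = 6.0                                # assumed squared-Mahalanobis cut-off

def skin_mask(bgr_image):
    """Return a binary mask of likely skin pixels (face/hand candidates)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    # OpenCV orders channels Y, Cr, Cb; take Cb and Cr for the colour model.
    cb, cr = ycrcb[:, :, 2], ycrcb[:, :, 1]
    diff = np.dstack([cb - MU[0], cr - MU[1]])
    # Squared Mahalanobis distance to the skin-colour mean for every pixel.
    d2 = np.einsum('hwi,ij,hwj->hw', diff, COV_INV, diff)
    mask = (d2 < THRESHOLD).astype(np.uint8) * 255
    # Morphological opening and closing remove speckle and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask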

]]>
Nov 2013
<![CDATA[Studying the Differential Problem with Maple]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Chii-Huei Yu   

This article takes the mathematical software Maple as an auxiliary tool to study the differential problem of two types of trigonometric functions. We obtain the Fourier series expansions of the derivatives of any order of these two types of functions by using the term-by-term differentiation theorem, and hence greatly reduce the difficulty of calculating their higher-order derivative values. In addition, we provide some examples to carry out the calculations in practice. The research method adopted in this study involves finding solutions through manual calculations and verifying these solutions with Maple. This type of research method not only allows the discovery of calculation errors, but also helps modify the original directions of thinking from manual and Maple calculations.
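As a generic illustration of differentiation term by term (not one of the two trigonometric function types treated in the paper), consider the uniformly convergent series

\[
f(x)=\sum_{n=1}^{\infty}\frac{\sin(nx)}{n^{4}}
\quad\Longrightarrow\quad
f'(x)=\sum_{n=1}^{\infty}\frac{\cos(nx)}{n^{3}},
\qquad
f''(x)=-\sum_{n=1}^{\infty}\frac{\sin(nx)}{n^{2}},
\]

where the term-by-term differentiation theorem applies because each differentiated series is dominated by \(\sum 1/n^{3}\) or \(\sum 1/n^{2}\) and therefore converges uniformly.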

]]>
Nov 2013
<![CDATA[Ontology-based Query Processing in a Dynamic Data Integration System]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Mohammed Mahmudur Rahman   

Data integration is concerned with unifying data that share some common semantics but originate from unrelated sources. When we work on data integration, we must necessarily take into account a more important and complex concept called “heterogeneity”. We begin by introducing data integration systems (DIS), ontologies, and other preliminary concepts that are used throughout the presentation. We then discuss how ontologies can support integration, as well as the problems that arise when ontologies are used for integration. We define the architecture used for semantic query distribution and formally discuss the problem of query rewriting inside our data integration framework. Finally, we close the paper by illustrating the current problems and future work.

]]>
Nov 2013
<![CDATA[Microcalcification Detection in Mammography Images Using 2D Wavelet Coefficients Histogram]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Ebrahim Jelvehfard   Karim Faez   and Afsane Laluie   

Breast cancer is one of the most common illnesses in recent years. Diagnosing cancer at an early stage can have a considerable effect on therapy, so many attempts have recently been made to diagnose this illness at its first stage. Mammography imaging is the most commonly used technique to detect breast cancer before clinical symptoms appear. Extracting features that facilitate the detection of cancer symptoms, minimizing false positives without a significant decrease in sensitivity, is of great importance. Microcalcification is an important indicator of cancer. In this research, a new method for detecting microcalcifications in mammography is presented. Owing to the ability of the wavelet transform to decompose an image and separate its details, it can be used to expose this symptom in mammograms. In this work, a two-dimensional wavelet transform is performed for feature extraction, and these features are used to diagnose cancer symptoms in mammography images. After the feature extraction step, classification is done using a Support Vector Machine (SVM). In the evaluation performed, Regions of Interest (ROIs) of different dimensions have been used as input data, and the results show that the proposed feature extraction method can have a significant impact on improving the performance of detection systems.
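A minimal sketch of the kind of pipeline the abstract describes, assuming the PyWavelets and scikit-learn libraries: a 2D wavelet decomposition of each ROI, a histogram of the detail coefficients as the feature vector, and an SVM classifier. The wavelet, bin count, value range, and SVM settings are illustrative assumptions, and the random arrays merely stand in for labelled mammogram patches.

# 2D wavelet coefficient histogram features + SVM classification (sketch).
import numpy as np
import pywt
from sklearn.svm import SVC

def roi_features(roi, wavelet='db2', bins=32):
    """Histogram of 2D wavelet detail coefficients for one region of interest."""
    _, (ch, cv, cd) = pywt.dwt2(roi.astype(np.float64), wavelet)
    details = np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()])
    hist, _ = np.histogram(details, bins=bins, range=(-100, 100), density=True)
    return hist

# Toy usage with random ROIs standing in for labelled mammogram patches.
rng = np.random.default_rng(0)
rois = [rng.normal(size=(64, 64)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)          # 1 = microcalcification, 0 = normal
X = np.array([roi_features(r) for r in rois])
clf = SVC(kernel='rbf').fit(X, labels)
print(clf.predict(X[:5]))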

]]>
Nov 2013
<![CDATA[Third Dimension Urgent Demand to Support Collaborative Learning in the Virtual Learning Environment VLE]]> Source:Computer Science and Information Technology  Volume  1  Number  3  

Ahmed Dheyaa Basha   and Satar Habib Mnaathr   

The main aim of supporting collaborative learning through multiple devices, such as computers or smartphones, is to enable and encourage students to work together, to confirm and verify their skills and knowledge through mutual interaction, and to foster their role in the social dimension. Virtual learning environments are equipped with several features that support informal communication and the creation of communities, and many universities and organizations have adopted them to support distance learning. In this study, we report on an experiment that evaluated the value added by a Second Life platform built around a meeting system for a collaborative learning activity, compared with systems that rely on synchronous text-based communication. The outcomes show that adopting a three-dimensional virtual learning environment does not improve the perceived level of comfort with communication, nor does it introduce disorder during the activity, while users' perception of the majority of the features presented is positive.

]]>
Nov 2013
<![CDATA[Exact Formulas for the Average Internode Distance in Mesh and Binary Tree Networks]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Behrooz Parhami 

The average internode distance in an interconnection network (or its average distance, for short) is an indicator of the expected message latency in that network under light and moderate network traffic. Unfortunately, it is not always easy to find an exact value for the average internode distance, particularly for networks that are not node-symmetric, because the computation must be repeated for many classes of nodes. In this short paper, we derive exact formulas for the average internode distance in mesh and complete binary tree networks.
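As a numerical cross-check of the kind of quantity being derived (not the paper's closed-form formulas), the average internode distance of a small mesh and a small complete binary tree can be computed directly by averaging shortest-path lengths over all distinct node pairs:

# Brute-force average internode distance for a p x q mesh and a complete binary tree.
from itertools import combinations

def mesh_average_distance(p, q):
    nodes = [(x, y) for x in range(p) for y in range(q)]
    dists = [abs(a[0] - b[0]) + abs(a[1] - b[1])        # Manhattan distance in a mesh
             for a, b in combinations(nodes, 2)]
    return sum(dists) / len(dists)

def binary_tree_average_distance(levels):
    """Complete binary tree with heap numbering 1 .. 2**levels - 1 (root = 1)."""
    def root_path(v):
        path = set()
        while v >= 1:
            path.add(v)
            v //= 2
        return path
    nodes = range(1, 2 ** levels)
    dists = []
    for a, b in combinations(nodes, 2):
        pa, pb = root_path(a), root_path(b)
        lca = max(pa & pb)                                # deepest common ancestor
        d = (len(pa) - len(root_path(lca))) + (len(pb) - len(root_path(lca)))
        dists.append(d)
    return sum(dists) / len(dists)

print(mesh_average_distance(4, 4), binary_tree_average_distance(4))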

]]>
Sep 2013
<![CDATA[How Jordanian Youth Perceive Social Networks Influence?]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Emad A. Abu-Shanab and Heyam A. Al-Tarawneh 

Social networks (SN) hit the Arab world strongly in the last few months, where young people communicated, collaborated, and shared all sorts of information and files through Facebook and other types of SNs. It is not clear how young people perceive this type of activity and how they are influenced in their daily lives. It is important to understand how young people see Facebook and understand its advantages and disadvantages. This study utilized 206 responses from different categories, mainly undergraduate students (paper survey) and Facebook users (electronic survey), and concluded that the highest perceived advantage of SN is “I can search and find new and old friends (classmates & relatives)” and the highest perceived disadvantage is “Excessive addiction”. Conclusions and future work are reported at the end.

]]>
Sep 2013
<![CDATA[GiLBCSteg: VoIP Steganography Utilizing the Internet Low Bit-Rate Codec]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Garrett Calpouzos and Cihan Varol 

GiLBCSteg is a steganography algorithm that utilizes an open-source low bit-rate audio codec (iLBC) used in Voice over Internet Protocol (VoIP) applications. GiLBCSteg achieves this by hiding data within the compression process of the audio signal, a required step in live VoIP applications. Specifically, GiLBCSteg is the first study in the literature that alters the linear spectral frequency (LSF) indices used by iLBC to encode hidden data within the audio signal. This encoding algorithm not only produces mild distortion, less than 5 dB of difference, which is not readily noticeable to the human ear, but also does not transmit the hidden message as a separate file.
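To illustrate the general idea of codec-domain steganography, here is a toy sketch that embeds message bits in the least-significant bits of a stream of quantisation indices; it is a generic illustration, not the GiLBCSteg algorithm or the iLBC codec itself.

# Toy codec-domain embedding: hide bits in the LSBs of quantiser indices.
def embed(indices, bits):
    """Return a copy of the index stream carrying the given bits in its LSBs."""
    out = list(indices)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit          # overwrite the LSB with the message bit
    return out

def extract(indices, n_bits):
    return [indices[i] & 1 for i in range(n_bits)]

cover = [37, 52, 18, 41, 60, 29, 44, 13]      # pretend LSF-style quantiser indices
message = [1, 0, 1, 1]
stego = embed(cover, message)
assert extract(stego, 4) == message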

]]>
Sep 2013
<![CDATA[Factors Influencing Group Decision Making Performance in a GSS Enabled Environment]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Rawan T. Khasawneh and Emad A. Abu-Shanab 

Group decision making is becoming a very common activity in human society all over the world. Business problems nowadays increasingly involve interactivity that requires collective effort and detailed information shared by a group of people working together. This paper is concerned with measuring the impact of group members' gender and familiarity on group decision making performance. An experimental approach is adopted, where a sample of students studying at Yarmouk University in Jordan performed a specific task in a GSS enabled environment. Results revealed that female-only groups performed better than male-only groups and that groups with familiar members have a better influence on the quality of decisions made during group work. Conclusions, contributions and future work are reported at the end.

]]>
Sep 2013
<![CDATA[On the Problem of Modeling of the Relations between Documents]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Yu.V. Leonova and V.B. Barakhnin 

We address the problem of adequately describing various aspects of inter-related information when duplication of information must be avoided. We deal with compound documents, where one document is part of another. The proposed solution lies in a specific way of storing the essential information: each fact related to a specific entity, or to some property of that entity, is stored in a single document that establishes that particular property, while the relations between documents are handled in a "many-to-many" manner. A model for the directed links between documents is arranged by establishing a set of documents specifying binary relations with additional attributes. This effectively transfers the constructed relations to the level that defines the structure of the documents.

]]>
Sep 2013
<![CDATA[To the Question of Fuzzy Evaluation of Quality of Trainees Knowledge in the System of Distance Learning]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Usmanov R.N. and Khamidov V.S. 

We present a technique of computational experiments for assessing the quality of the educational process in distance learning systems, based on a fuzzy-set approach.

]]>
Sep 2013
<![CDATA[Generation of Questions Sequences in Intelligent Teaching Systems Based on Algebraic Approach]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Alexander Zuenko Alexander Fridman and Boris Kulik 

The paper describes an approach to the development of question-and-answer teaching systems based on controlled languages and algebraic models for the representation and processing of question-and-answer texts. We propose using a partial order relation "question-subquestion" to build an individual teaching trajectory. To model an examination strategy, we use defeasible reasoning formalized within our earlier developed QC-structures.
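A small sketch of how a "question-subquestion" partial order can drive an individual trajectory: subquestions are scheduled before the questions that depend on them via a topological ordering. The question identifiers are invented for illustration, and the sketch does not reproduce the algebraic QC-structure machinery.

# Ordering questions so that every subquestion precedes the questions that need it.
from graphlib import TopologicalSorter

# Map each question to the subquestions it depends on (illustrative identifiers).
question_order = {
    "integrate_by_parts": {"derivative_of_product", "basic_integration"},
    "derivative_of_product": {"derivative_definition"},
    "basic_integration": {"derivative_definition"},
    "derivative_definition": set(),
}

trajectory = list(TopologicalSorter(question_order).static_order())
print(trajectory)   # prerequisites appear before the questions that depend on them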

]]>
Sep 2013
<![CDATA[Detection and Management System of Digitized Images of Fingerprints]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Angélica González José Gómez Miguel Ramón and Luis García 

This article describes a computer system developed for the detection of minutiae in fingerprint images. The system can perform automatic and manual extraction of important data from fingerprint images, storing that information in an internal database. The statistical calculations generated by the system, including cumulative frequency analysis, are very important for calculating distinction rates. The system is also able to differentiate records by gender, finger, fingerprint type, and the sector into which the fingerprint has been divided.

]]>
Sep 2013
<![CDATA[Descriptive Study of 3D Imagination to Teach Children in Primary Schools: Planets in Outer Space (Sun, Moon, Our Planet)]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Satar Habib Mnaathr and Ahmed Dheyaa Basha 

This paper aims to support young children's understanding of the Sun, the Moon, and our planet by using technology, particularly three-dimensional virtual environments, so that pupils at the second stage of primary school can gain an early understanding of the shapes of the Sun, Moon, and Earth, the relationships among them, how and why the Moon appears to change shape, and the phenomenon of day and night, through the use of 3D hologram vision. A 3D hologram vision system may be an effective tool for teaching various topics, such as astronomy, by interpreting the relationships between objects in space. At present, the Indiana state science education standards do not clearly refer to the learning of these astronomical concepts and phenomena before the fourth stage. We expect our findings to indicate that students can learn these concepts earlier in their learning lives with the implementation of these technologies. In addition, we expect that these techniques could bring about a revolution and a quantum leap in education, raising children's ability and cognition in understanding scientific topics.

]]>
Sep 2013
<![CDATA[Developing an Expertise Interaction Meta-Model for Group Decision Support System (GDSS) ]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Christina Albert Rayed 

Group Decision Support Systems (GDSS) provide a group electronic environment in which managers and teams can collectively make decisions and design solutions for unstructured and semi-structured problems. In this paper, we propose a model for group decision support systems based on an expertise interaction meta-model. We discuss specialized systems such as Management Information Systems (MIS), Decision Support Systems (DSS), and Executive Information Systems (EIS), together with technologies for data mining and Online Analytical Processing (OLAP), and argue that the role of knowledge-based DSS should be to allow experts to broaden and expand their expertise, not to narrow it down to focus on the specific decision needs of managers and employees. An Expert System (ES) is developed with a knowledge base captured from numerous experts in the same subject area as well as from a variety of specialists in international financial management, international accounting, international tax, and so forth.

]]>
Sep 2013
<![CDATA[Multiuser Message Authentication, Application to Verifiable Secret Sharing and Key Management Schemes]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Aparna Ram and B.B. Amberker   

Providing authentication for the messages exchanged among a group of users is an important issue in secure group communication. We develop multiuser authentication schemes with perfect protection against colluding malicious users numbering fewer than k, where all the n users are allowed to be senders (simultaneously with being receivers). In our scheme each user is required to store secret information of size 2k log₂ q₁ bits, and tags to authenticate messages are of length k log₂ q. We use this to obtain, in the setup in which the participants are allowed to employ previously distributed private keys, a non-interactive verifiable secret sharing scheme for multiple dealers, in which shares reveal no information about the secret, and dealers cannot deal inconsistent shares. We also provide authentication to the group key management schemes proposed by Blundo et al. and Fiat-Naor without incurring extra storage cost.
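For background, the following is the classical unconditionally secure one-time MAC over a prime field, shown only to illustrate the flavour of information-theoretic authentication that such multiuser schemes build on; it is not the multiuser construction, the secret sharing scheme, or the key management extension of the paper, and the modulus is an illustrative choice.

# Classical one-time MAC over a prime field: tag = a*m + b mod q.
import random

Q = 2**61 - 1          # a prime modulus (illustrative choice)

def keygen(rng):
    return rng.randrange(1, Q), rng.randrange(Q)        # secret key (a, b)

def tag(key, message):
    a, b = key
    return (a * message + b) % Q

def verify(key, message, t):
    return tag(key, message) == t

key = keygen(random.SystemRandom())
m = 424242
assert verify(key, m, tag(key, m))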

]]>
Sep 2013
<![CDATA[Using Parallel Filtering Algorithms to Solve the 0-1 Knapsack Problem on DNA-based Computing]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Sientang Tsai 

Adleman first showed that deoxyribonucleic acid (DNA) strands can be employed to compute a solution to an instance of the NP-complete Hamiltonian Path Problem (HPP). Lipton later demonstrated that Adleman’s techniques could also be used to solve the satisfiability (SAT) problem. In this paper, we demonstrate how the DNA operations presented by Adleman and Lipton can be used to develop a DNA-based algorithm for solving the 0-1 Knapsack Problem.
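As a conventional-software analogue of the parallel-filtering idea, the sketch below enumerates every 0-1 selection (standing in for the DNA strands that encode all candidate solutions simultaneously), filters out selections that exceed the capacity, and keeps the most valuable feasible one. It does not model the Adleman-Lipton tube operations.

# Brute-force "parallel filtering" analogue for the 0-1 knapsack problem.
from itertools import product

def knapsack_filtering(weights, values, capacity):
    best_value, best_choice = 0, None
    for choice in product((0, 1), repeat=len(weights)):        # the full solution space
        weight = sum(w for w, c in zip(weights, choice) if c)
        if weight <= capacity:                                  # "filter" infeasible strands
            value = sum(v for v, c in zip(values, choice) if c)
            if value > best_value:
                best_value, best_choice = value, choice
    return best_value, best_choice

print(knapsack_filtering(weights=[3, 4, 5], values=[4, 5, 6], capacity=7))   # (9, (1, 1, 0))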

]]>
Sep 2013
<![CDATA[A Bayesian Network Decision Support System for Order Management in New Product Development]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Hamed Fazlollahtabar and Mina Ashena 

Recently, the way firms enter markets has been influenced by customer requirements that drive product improvement. Thus, market-driven product design and development is now a popular research topic in the literature. Predicting order management for a new product helps firms overcome future uncertainties. Here, we propose a decision support system for customers' order management in a new product development process. The drawback of complicated decision support problems is the complexity involved in interpreting causal relationships among decision variables. Bayesian Networks (BN) have shown excellent decision support competence due to their flexible structure, which allows appropriate and robust causal relationships to be extracted among a target variable and related explanatory variables. We make use of a decision support BN as a prediction aid for order management in a new product development process.
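A tiny hand-rolled Bayesian-network sketch of the prediction idea, with two invented variables and assumed probability tables; the structure and numbers are illustrative stand-ins, not the network used in the paper.

# Two-node Bayesian network: market_response -> order_volume, queried via Bayes' rule.
P_market = {"strong": 0.4, "weak": 0.6}
P_order_given_market = {                      # P(order_volume | market_response)
    "strong": {"high": 0.7, "low": 0.3},
    "weak":   {"high": 0.2, "low": 0.8},
}

def posterior_market_given_order(order):
    """P(market_response | order_volume = order), via Bayes' rule."""
    joint = {m: P_market[m] * P_order_given_market[m][order] for m in P_market}
    z = sum(joint.values())
    return {m: p / z for m, p in joint.items()}

print(posterior_market_given_order("high"))   # {'strong': 0.7, 'weak': 0.3}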

]]>
Sep 2013
<![CDATA[Identity Management in the Internet of Things: the Role of MANETs for Healthcare Applications]]> Source:Computer Science and Information Technology  Volume  1  Number  2  

Caroline Chibelushi Alan Eardley and Abdullahi Arabo 

The Internet of Things (IoT) in healthcare and medical applications promises to solve many problems which are currently challenging the sector, ranging from remotely caring for our aging population to medical discoveries on incurable diseases. However, the challenges and risks of having 50 billion Things and users connected together are complex and may prove difficult to control. Concerns have been raised in relation to IoT Identity Management (IDM) issues, in which each Thing (user and device) will be required to have a unique identity, and the IDM must be able to distinguish between a device and a user, as well as ensure identity and information context safety. This paper examines the underlying issues behind IDM and proposes a framework which aims to achieve the identification of Things and their safe management. The IDM framework is embedded in Mobile Ad Hoc Networks (MANETs) and assumes that most healthcare devices will be linked wirelessly and in mobile environments. The paper aims to open a research debate which will help to solve future IoT IDM issues in healthcare applications.

]]>
Sep 2013
<![CDATA[Creating Pathway for Enhancing Student Collection of Academic Records in Nigeria-a New Direction]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Ugochukwu Onwudebelu Sanjo Fasola and Eweje Oluwafemi Williams 

Education is priceless, and every academic institution needs to be serious about its management and administration in order not to jeopardize students' academic progress. There is therefore a need to significantly improve, and make far more convenient, the arrangement by which former students receive their final academic transcripts. It is indeed a nightmare of unimaginable proportions to attempt to get academic records from some universities, especially in recent times, as a result of the administrative heedlessness of some records officials in the university. A request for an undergraduate transcript often means the requestor must wait far longer than expected in the hope of receiving the results. This paper proposes Transcript Xpress, which will open new and convenient routes to key educational information solutions, provide application service delivery, and ensure that proper records are kept intact. This will improve a university's efficiency and effectiveness through the provision of high-quality information systems and services. Transcript Xpress will be the catalyst for change through improved educational solutions.

]]>
Jul 2013
<![CDATA[ProtoCloud: A Cloud based Desktop]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Imran Ali Mirza Amiya K. Tripathy Vishwanath Sarang Ameya Joshi and Shefali Shah 

In this work, an attempt has been made to implement a Desktop Service that is accessible through a web browser, together with a few basic applications, aimed at enhancing portable environments using concepts of cloud computing, thus providing users with efficient and portable access to an Online Operating System. The proposed system takes the concept of operating systems to the Web. It aims to combine small to medium-scale applications and services into a standalone operating system, wherein applications and services live and run on the Internet instead of on the hard disk.

]]>
Jul 2013
<![CDATA[Dynamic Key Generation During a Communication Instance Over GSM]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Joseph Zalaket and Khalil Challita 

The use of mobile phones has become vital in our everyday life. This emergence has led many companies to allow activities that previously ran strictly over the Internet, such as electronic payment, to run over the mobile network. These circumstances make the security of mobile communication a priority in order to preserve the authentication, confidentiality and integrity of data sent between subscribers and the mobile network. In this paper, we propose a dynamic key generation scheme for the A5 GSM encryption algorithm to strengthen security and protect the transferred data. The improvement we make here is the generation and use of multiple encryption keys during a single phone communication, which makes it much harder for a cryptanalyst to eavesdrop on a phone conversation. Note that our algorithm can be implemented over any mobile generation (GSM/3G/4G).
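To illustrate the general concept of changing keys during a single communication (not the exact derivation proposed in the paper or the A5 key schedule), the sketch below derives a fresh 64-bit session key for each time interval of a call from a shared master key using an HMAC-based derivation; the key length and inputs are illustrative assumptions.

# Per-interval session keys derived from a master key (generic KDF sketch).
import hmac
import hashlib

def interval_key(master_key: bytes, call_id: bytes, interval: int, length: int = 8) -> bytes:
    """Derive the key used during the given interval of a call (64 bits, like a Kc)."""
    info = call_id + interval.to_bytes(4, "big")
    return hmac.new(master_key, info, hashlib.sha256).digest()[:length]

master = b"\x01" * 16
call = b"call-2013-07-001"
keys = [interval_key(master, call, i) for i in range(3)]   # one key per interval
print([k.hex() for k in keys])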

]]>
Jul 2013
<![CDATA[The Future of Virtual Environments: The Development of Virtual Technology]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Hiu-fai Lau Kung-wong Lau and Chi-wai Kan 

Although the application of virtual technology has experienced exponential growth over the last three to four decades, researchers have still not established a clear direction for its development, and there is an urgent need to identify the direction of virtual technology development in the future. Based on previous studies of VR and the virtual technology development process, this study investigates: (1) whether the direction of virtual technology development changed throughout this long process or remained consistent over time; and (2) whether there was a predictable direction of virtual technology development. To understand the phenomenon, this research studies the events of the virtual technology development process diachronically from the 1950s to the present. We present findings describing four periods of virtual technology development, namely telepresence, interactivity, connectivity and synthesis. These four periods describe the condition of virtual technology development and are situated within its historical contexts. With this analysis of virtual technology development in historical context, the article forecasts the future of VR development. The findings will contribute to the future creation of stereoscopic virtual worlds.

]]>
Jul 2013
<![CDATA[Detecting Cloaking Web Spam Using Hash Function]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Shekoofeh Ghiam and Alireza Nemaney Pour 

Web spam is an attempt to boost the ranking of particular pages in search engine results. Cloaking is one such spamming technique. Previous cloaking detection methods, based on term/link differences between the crawler's and browser's copies of a page, are not accurate enough. The latest technique is a tag-based method, which can find cloaked pages better than previous algorithms; however, addressing the content of web pages provides more accurate results. This paper proposes an algorithm that works on term differences between the crawler's and browser's copies. In addition, dynamic cloaking, which is a new and complicated kind of cloaking, is addressed. In order to increase the speed of comparison, we introduce a hash value calculated by a hash function. The proposed algorithm has been tested on a data set of URLs. Experimental results indicate that our algorithm outperforms previous methods in both precision and recall. We estimate that about 9% of all URLs in the data set utilize static cloaking and about 2% utilize dynamic cloaking.
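A minimal sketch of the comparison step, under assumed details: the term sets of the crawler's and browser's copies are hashed for a quick equality test, and a large term-set divergence is flagged as potential cloaking. The fetching and user-agent handling are omitted, and the hash function and threshold are illustrative assumptions rather than the paper's choices.

# Compare crawler and browser copies of a page via hashed term sets (sketch).
import hashlib
import re

def term_set(html: str) -> set:
    return set(re.findall(r"[a-z0-9]+", html.lower()))

def term_hash(terms: set) -> str:
    return hashlib.sha256(" ".join(sorted(terms)).encode()).hexdigest()

def looks_cloaked(crawler_copy: str, browser_copy: str, threshold: float = 0.3) -> bool:
    crawler_terms, browser_terms = term_set(crawler_copy), term_set(browser_copy)
    if term_hash(crawler_terms) == term_hash(browser_terms):
        return False                                   # identical term sets: not cloaked
    union = crawler_terms | browser_terms
    diff = len(crawler_terms ^ browser_terms) / max(1, len(union))
    return diff > threshold                            # large divergence: flag for review

print(looks_cloaked("<p>cheap flights hotels</p>", "<p>completely unrelated casino page</p>"))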

]]>
Jul 2013
<![CDATA[Conceptual Database Retrieval through Multilingual Thesauri]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

E. Petraki C. Kapetis and E. J. Yannakoudakis 

In traditional database management systems, information retrieval is often carried out using keywords contained within the fields of each record. Because a term (concept) can be expressed in several ways, a significant number of records are ignored by free-text techniques, which use only a posteriori relations between terms. This paper proposes the utilisation of a priori conceptual relations between terms that exist independently of any documents, through a controlled vocabulary known as a thesaurus, which incorporates both terms and the conceptual relations among them. The paper discusses the integration of multilingual thesauri in the set-theoretic FDB (Frame DataBase) data model, which offers by default a universal schema for all applications. All changes to the structure of the logical-level database schema can be carried out by modifying the appropriate metadata. The purpose of this extension is to enable the database user to apply queries to a database using information from multilingual thesauri. This approach extends the FDB model so that users can apply queries to the database using both a priori and a posteriori relationships. Apart from free-text retrieval and "conceptual searching", the proposed structure enables multilingual searching independently of the language used to store the data itself.
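A small sketch of a-priori concept expansion with a multilingual thesaurus, using invented thesaurus entries: a query term is expanded with its equivalent terms in other languages and its narrower terms before ordinary free-text retrieval runs. This illustrates the idea only, not the FDB metadata structures described in the paper.

# Expand a query term through a toy multilingual thesaurus before retrieval.
THESAURUS = {
    "car": {
        "equivalent": {"en": ["automobile"], "fr": ["voiture"], "de": ["Auto"]},
        "narrower": ["hatchback", "sedan"],
    },
}

def expand_query(term: str) -> set:
    entry = THESAURUS.get(term, {})
    expanded = {term}
    for terms in entry.get("equivalent", {}).values():
        expanded.update(terms)
    expanded.update(entry.get("narrower", []))
    return expanded

print(expand_query("car"))   # retrieval then matches records in any of these forms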

]]>
Jul 2013
<![CDATA[Dynamic Architectural Framework for Cloud Computing]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Hassan Reza and Nitin Karodiya 

Cloud computing has been a buzzword for quite a while. Many companies offer cloud infrastructures and services, which can be used by organizations and individuals at a nominal charge. There are various cloud providers in the market, but when it comes to interoperability or the quality attributes of the different types of cloud services offered by different providers, there is no consensus on standards. Another important issue is that there is no framework that can define the quality attributes of a cloud and measure them. Different companies have used different architectural patterns for implementing their clouds, but none of them has proved to provide a good balance of quality attributes such as Greenability, Availability, Security, Reliability, Performance, Portability and Interoperability. In this paper, we propose an architectural framework for defining and measuring the quality attributes of a cloud and for customizing these quality attributes to satisfy the quality requirements of different subscribers, which can make the cloud more usable and adoptable by reducing cost and increasing profit for cloud vendors. It also makes it possible for the same cloud to behave differently for different subscribers according to their needs.

]]>
Jul 2013
<![CDATA[Designing an Intelligent Warehouse Based on Genetic Algorithm and Fuzzy Logic for Determining Reorder Point and Order Quantity]]> Source:Computer Science and Information Technology  Volume  1  Number  1  

Esmail Khanlarpour Hamed Fazlollahtabar and Iraj Mahdavi 

We develop a model to determine the reorder point and the optimal order quantity in a warehouse. Because of maintenance costs, it is advisable to consider both the order time period and the order quantity, preventing excessive inventory storage while at the same time reducing cost. In our study, we employ fuzzy logic as a decision aid for order timing and a Genetic Algorithm for the optimal order quantity. The decision-making process is implemented in application software designed for a dairy firm.
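A compact sketch of the order-quantity side of such a system: a tiny genetic algorithm searching for the quantity that minimises an assumed ordering-plus-holding cost. The cost model and GA parameters are illustrative assumptions, not the paper's, and the fuzzy-logic timing component is omitted.

# Minimal genetic algorithm for an order quantity under an assumed cost model.
import random

DEMAND, ORDER_COST, HOLD_COST = 1200, 50.0, 0.4      # assumed yearly figures

def yearly_cost(q):
    if q <= 0:
        return float("inf")
    return (DEMAND / q) * ORDER_COST + (q / 2) * HOLD_COST

def genetic_order_quantity(pop_size=30, generations=60):
    population = [random.randint(1, DEMAND) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=yearly_cost)
        parents = population[: pop_size // 2]                        # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                                      # crossover: average
            if random.random() < 0.2:
                child = max(1, child + random.randint(-50, 50))       # mutation
            children.append(child)
        population = parents + children
    return min(population, key=yearly_cost)

print(genetic_order_quantity())   # should approach the EOQ of roughly 548 units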

]]>
Jul 2013