Accepted Papers


  • Routing and Tracking System for Buses
    Ahmed Ahmed1, Elshaimaa Nada1 and Wafaa Al-Mutiri2, 1Zagazig University, Egypt, 2Taibah University, Saudi Arabia

    ABSTRACT

    This paper proposes the development of an Android app to improve transportation services for the bus rental companies that transport Taibah University students. It aims to reduce students' waiting time and to facilitate the sharing of up-to-date information between bus drivers and students. The application runs only on Android devices. It informs students of the exact arrival and departure times of buses on each route. The proposed app is intended specifically for students and drivers of Taibah University. Any change in the scheduled movement of the buses is updated in the software, and regular alerts are sent in case of delays or cancellation of buses. Bus locations and routes are shown on dynamic maps using Google Maps. The application was designed and tested; users confirmed that it provides a real-time service and is very helpful to them.

  • Selection of the Best Despeckle Filter of Ultrasound Images
    Ghada N. H. Abd-ElGwad and Yasser M. K. Omar, Arab Academy for Science, Technology and Maritime Transport, Cairo, Egypt

    ABSTRACT

    Ultrasound imaging (sonography) is one of the most widely used medical imaging modalities. Speckle noise degrades the quality of ultrasound images. Various despeckling techniques exist to remove this noise, but none is efficient for all images, and a physician cannot easily select the best technique manually. Four despeckling techniques are considered: a linear filter, a non-linear filter, a diffusion filter, and a wavelet filter. This paper applies these techniques to a specific dataset. The results are evaluated based on expert opinion, and a comparison is conducted between the expert opinion and the features extracted from both the original and despeckled images. We apply parallel coordinates to visualize the extracted features before and after applying the best despeckling techniques, in order to identify the dominant features for choosing the most suitable technique. The results show that there are dominant features such as contrast, correlation, entropy, mean, and variance. These features are important for automatically selecting a despeckling technique.
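
    The first two filter families above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: a local mean (linear) and a local median (non-linear) filter applied to a synthetic image with multiplicative speckle noise. The diffusion and wavelet filters are omitted, and the window size and noise parameters are arbitrary assumptions.

```python
import numpy as np

def mean_filter(img, k=3):
    """Linear despeckling: local mean over a k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def median_filter(img, k=3):
    """Non-linear despeckling: local median over a k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]
    return np.median(np.stack(stack), axis=0)

# Speckle is commonly modelled as multiplicative noise: noisy = clean * n,
# with n drawn from a unit-mean gamma distribution (parameters assumed here).
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
for name, filt in [("mean", mean_filter), ("median", median_filter)]:
    mse = np.mean((filt(noisy) - clean) ** 2)
    print(f"{name}: MSE vs clean = {mse:.4f}")
```

    Both filters trade detail for noise suppression differently, which is exactly why the paper's feature-based selection between filter families is useful.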

  • Customized Mechanism for Cloud Based Data Sharing Over Intranet – Study Based on Survey Conducted for Pakistani Business Industry
    Qazi Shahab Azam1 and Muhammad Zulqarnain Siddiqui2, 1Iqra University, Pakistan, 2Malaysia University of Science and Technology, Malaysia

    ABSTRACT

    Public cloud storage gives an organization's clients access to store and share their data. Since all services are executed over the web, the organization lacks control over the data and the system. The question then arises of how the organization can guarantee the security of its data, so that it is not accessed by anyone who is not its legitimate owner, and how performance will be affected if someone does gain access. The iterative process model offers a distinct structure for application development, in which a large application can be broken into small modules. In iterative development, source code is designed, developed, and tested in repeated cycles until it reaches a suitable working form and can be delivered to clients; new features can be added with each cycle.

  • Checking Behavioural Compatibility in Service Composition with Graph Transformation
    Redouane Nouara and Allaoua Chaoui, MISC Laboratory, University Abdel Hamid Mehri Constantine 2, Constantine, Algeria

    ABSTRACT

    The success of Service Oriented Architecture (SOA) largely depends on the success of automatic service composition. The dynamic service selection process should ensure full compatibility between the services involved in the composition. This compatibility concerns both static properties, called interface compatibility, which can be easily proved, and especially behavioural compatibility, which requires composability checking of the basic services. In this paper, we propose (1) a formalism for modelling composite services using an extension of the Business Process (BP) modelling approach proposed by Benatallah et al. and (2) a formal verification approach for service composition. This approach uses the Graph Transformation (GT) methodology as a formal verification tool. It allows behavioural compatibility verification of two given services modelled by their BPs, which are used as the source graph of the GT operation. The idea consists of (1) trying to dynamically generate a graph grammar R (a set of transformation rules) whose application generates the composite service, if it exists, and in that case (2) checking deadlock freedom in the resulting composite service. To this end, we propose an algorithm that we have implemented using AGG, an algebraic graph transformation API, under the Eclipse IDE.

  • Using OpenCL with GPU to Accelerate Local Tone Mapping for High Dynamic Range Images
    Kuo-Feng Liao and Yarsun Hsu, National Tsing Hua University, Hsinchu, Taiwan

    ABSTRACT

    Tone mapping is used to transform HDR (high dynamic range) images to a low dynamic range. This paper describes an algorithm to display high dynamic range images. Although a local tone-mapping operator reproduces images with better detail and contrast than a global operator, local tone mapping usually requires a huge amount of computation and takes a long time to display an HDR image. We have designed a highly parallel method using a Graphics Processing Unit (GPU) to accelerate the computation in order to achieve real-time display, since the algorithm can be highly parallelized. In order to run on different heterogeneous systems, we chose OpenCL, instead of CUDA, for our implementation. We have demonstrated that the speed-up can be as high as 63 times for a 1280x960 image.
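
    To illustrate the kind of per-pixel work a tone-mapping kernel performs, here is a minimal global Reinhard-style operator in NumPy. This is an assumed sketch for illustration only: the paper implements a local operator in OpenCL, while this version applies the simpler global map L/(1+L) after scaling by the log-average luminance.

```python
import numpy as np

def reinhard_global(luminance, key=0.18, eps=1e-6):
    """Global Reinhard-style tone map: compress HDR luminance into [0, 1).

    Scale by the key value over the log-average luminance, then apply
    L / (1 + L). Every pixel is processed independently, which is what
    makes this kind of operator easy to parallelize on a GPU.
    """
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # log-average luminance
    scaled = (key / log_avg) * luminance
    return scaled / (1.0 + scaled)

# Synthetic 1280x960 HDR luminance spanning many orders of magnitude
rng = np.random.default_rng(0)
hdr = rng.lognormal(mean=0.0, sigma=3.0, size=(960, 1280))
ldr = reinhard_global(hdr)
print(f"input range: {hdr.min():.2e}..{hdr.max():.2e}, output max: {ldr.max():.4f}")
```

    A local operator replaces the single global scale with a per-pixel neighbourhood average, which is where the heavy computation the paper accelerates comes from.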

  • IPv6 Performance Analysis, Transition Strategies and Considerations for VoIP Deployment
    Paschal A. Ochang1 and Phil Irving2, 1Federal University Lafia, Nigeria, 2University of Sunderland, England

    ABSTRACT

    The transition to IPv6 is gradually progressing due to the numerous advantages it offers, but integration with popular telephony applications like Voice over Internet Protocol (VoIP) has been lengthy. Organisations have implemented VoIP technologies rapidly because of the advantages they offer in terms of low cost and the ability to transfer voice over an Internet Protocol network, but most implementations have used IPv4, since IPv4 is already deployed on most networks. This article looks at the support IPv6 provides for VoIP, based on the enhanced features it possesses, while keeping in view the performance analysis of VoIP deployment with IPv6 and other characteristics that affect their integration. Furthermore, the paper analyses different deployment strategies that can be used to deploy and implement VoIP with IPv6, and how to maintain interoperability between VoIP IPv6 networks and VoIP IPv4 networks.

  • Requirements Inconsistency Detection Using Extreme Programming
    M. Khlaif, R. Awami and R. Abudalal

    ABSTRACT

    The main objective of a software development methodology is to meet customer needs. Requirements engineering is the process of capturing those needs, and overall project success or failure depends on the user requirements. Many factors lead to requirement inconsistency, such as voluminous requirements, changing requirements, complex requirements, and conflicting stakeholder requirements. In this paper we propose a method for inconsistency detection using the Extreme Programming (XP) methodology, via the application of a developed algorithm that implements the detection process, the inconsistency-locating process, and the processing of broken rule types.

  • Level-Based Twitter News Recommendation Scheme for STEAM Education
    Yongsung Kim, Seungwon Jung, Seonmi Ji and Eenjun Hwang, Korea University, Seoul, Korea

    ABSTRACT

    In recent years, STEAM education has been increasingly used as a new educational method in various subject classes. In particular, STEAM education using recent science/technology content can be an attractive educational method for digital-generation learners, since they are familiar with consuming such content on social networking services. One important requirement for improving the effect of STEAM education is to provide appropriate content to the learners. Considering the large volume of SNS data, it is difficult for an instructor to identify appropriate educational materials by hand. Therefore, in this paper, we propose a method for automatically recommending science/technology news at the learner's level for STEAM education. To show the effectiveness of our scheme, we implemented a prototype system and present some of the experimental results.

  • Mean Reversion with Pair Trading in Indian Private Sector Banking Stocks
    Umesh Gupta, Sonal Jain and Mayank Bhatia, JK Lakshmipat University, India

    ABSTRACT

    A company's stock is an integral part of the market's evaluation of the company's performance. In this paper, the correlation and mean-reverting behaviour of various private-sector banking stocks from the Indian stock market are examined. Five private-sector banks (and the ten pairs among them) were selected for the study. Along with the correlation test, the Augmented Dickey-Fuller test is conducted to determine whether each time series follows mean-reverting behaviour. It was found that three pairs from the banking sector were negatively correlated, and that a high degree of correlation does not necessarily result in mean reversion between two time series.
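
    The pair-trading analysis described above can be sketched numerically. The following is an illustrative stand-in, not the paper's methodology: it measures the correlation of two synthetic price series and fits an AR(1) model to their spread by least squares, using the lag-1 coefficient as a simplified proxy for the Augmented Dickey-Fuller test (a coefficient well below 1 suggests mean reversion). The 1:1 hedge ratio and the toy data are assumptions.

```python
import numpy as np

def pair_spread_stats(a, b):
    """Correlation of two price series and an AR(1) fit of their spread.

    A lag-1 coefficient phi well below 1 suggests the spread is
    mean-reverting, with a half-life of -ln(2)/ln(phi) periods.
    """
    corr = np.corrcoef(a, b)[0, 1]
    spread = a - b                      # simple 1:1 hedge ratio for illustration
    x, y = spread[:-1], spread[1:]
    phi, c = np.polyfit(x, y, 1)        # OLS fit of spread[t] = c + phi*spread[t-1]
    half_life = -np.log(2) / np.log(phi) if 0 < phi < 1 else np.inf
    return corr, phi, half_life

# Toy pair: a shared random-walk component plus a mean-reverting spread
rng = np.random.default_rng(0)
base = np.cumsum(rng.normal(0, 1, 1000)) + 100
noise = np.zeros(1000)
for t in range(1, 1000):                # AR(1) spread with true phi = 0.9
    noise[t] = 0.9 * noise[t - 1] + rng.normal(0, 1)
a, b = base + noise, base
corr, phi, half_life = pair_spread_stats(a, b)
print(f"corr={corr:.3f}  phi={phi:.3f}  half-life={half_life:.1f} periods")
```

    Note how the two series are highly correlated by construction, yet mean reversion is a property of the spread's dynamics, not of the correlation itself, which mirrors the paper's finding.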

  • A Novel based Approach for Liaison Analysis in Data Summarization and Deep Web Interface Data Extraction
    N. Raghavendra Sai1 and K. Satya Rajesh2, 1Bharathiar University, 2SRR & CRR Govt. College, India

    ABSTRACT

    The World Wide Web is developing rapidly, and a large number of Web databases are available for users to access. This fast development has changed the way information is managed and accessed. The Web can be divided into the Surface Web and the Deep Web: the Surface Web refers to static pages linked to other pages, while the Deep Web refers to pages created dynamically as the result of specific searches. Similarly, tweets are created as short text messages and shared among users and knowledge analysts. Twitter, which receives over four hundred million tweets per day, has emerged as a useful source of news, blogs, opinions, and more. Summarizing a continuous tweet stream is not an easy task, however, since an enormous number of tweets are trivial, unrelated, and noisy in nature, owing to the social nature of tweeting. Likewise, due to the large volume of web resources and the dynamic nature of the Deep Web, achieving wide coverage and high efficiency is a challenging issue. In this paper, we introduce a novel summarization framework (continuous summarization by stream clustering) that monitors summary-based and volume-based variation to produce timelines automatically from the tweet stream, and we also propose a two-stage framework, named Smart Crawler, for efficiently harvesting Deep Web interfaces. In the first stage, Smart Crawler performs site-based searching for center pages with the help of search engines and ranks websites to prioritize highly relevant ones for a given topic. In the second stage, Smart Crawler achieves fast in-site searching by excavating the most relevant links with adaptive link-ranking crawlers.

  • Comparison on Two Image Features Weight Adjustment Effect to Relevance Output Image Retrieval
    Petcharat Pattanasethanon, Rajamangala University of Technology Thanyaburi, Thailand

    ABSTRACT

    This research compares two features within an image, namely colour and edge characteristics, under different weight settings: colour weight 1, edge weight 1, and three colour-to-edge weight ratios (0.7:0.3, 0.5:0.5, and 0.3:0.7), to retrieve the most relevant query outputs. Based on the results, a colour-to-edge weight ratio of 0.5:0.5 retrieved the most relevant query images on the RGB colour model. The recall and accuracy remained robust, while the F-measure was fair. The HSV model evaluation was fair on both recall and accuracy, but the F-measure indicated a need for revision.
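
    Weighted combination of two feature distances, as compared above, can be sketched as follows. This is an assumed illustration: the 8-bin descriptors and the Euclidean distance are placeholders, since the abstract does not specify the exact feature representations or metric.

```python
import numpy as np

def weighted_retrieval(query_color, query_edge, db_color, db_edge,
                       w_color=0.5, w_edge=0.5):
    """Rank database images by a weighted sum of colour and edge distances."""
    d_color = np.linalg.norm(db_color - query_color, axis=1)
    d_edge = np.linalg.norm(db_edge - query_edge, axis=1)
    return np.argsort(w_color * d_color + w_edge * d_edge)

# Toy database of 10 images, each described by a colour histogram and an
# edge-direction histogram (both hypothetical 8-bin descriptors)
rng = np.random.default_rng(0)
db_color = rng.random((10, 8))
db_edge = rng.random((10, 8))
ranking = weighted_retrieval(db_color[3], db_edge[3], db_color, db_edge)
print(ranking[0])   # the query image itself ranks first
```

    Sweeping `w_color` and `w_edge` over the ratios listed in the abstract and scoring the rankings against ground truth is how the reported comparison would proceed.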

  • Effect of Language on Speaker Verification Performance
    Asma Chayeh1 and Leila Beltaifa-Zouari2, 1National School of Engineering of Sousse, Sousse, Tunisia, and 2National School of Engineering of Carthage, Tunis, Tunisia

    ABSTRACT

    Recent results have shown impressive gains in speaker verification performance. In this paper, we present three databases of the Arabic language. Each database consists of over 8 minutes of speech from Tunisian, Algerian, and Moroccan speakers, covering the diversity of the dialects spoken in the Maghreb. Speech samples were collected from the TV show Ness Nessma News. We developed the three phases of a speaker recognition system: MFCC feature extraction, speaker modelling with GMMs, and finally the decision step. The speaker verification system was implemented in MATLAB, using training and test data stored in WAV files. The system was evaluated on the TIMIT speech database and then on the Arabic databases, to test the robustness of the speaker verification system and to discuss the impact of the dialect variability of the English and Arabic languages.
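
    The GMM modelling and decision phases can be illustrated with a small NumPy-only sketch. This is a toy stand-in for the paper's MATLAB system: a diagonal-covariance GMM is trained with plain EM on random vectors standing in for MFCC frames, and the decision rule (compare the average log-likelihood of the test utterance under the claimed speaker's model against non-target data) is a simplified assumption.

```python
import numpy as np

def fit_diag_gmm(X, k=2, iters=50, seed=0):
    """Fit a diagonal-covariance GMM to feature vectors X with plain EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]
    vars_ = np.ones((k, d)) * X.var(axis=0)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities from per-component log densities
        log_p = (-0.5 * (((X[:, None, :] - means) ** 2) / vars_
                         + np.log(2 * np.pi * vars_)).sum(axis=2)
                 + np.log(weights))
        resp = np.exp(log_p - np.logaddexp.reduce(log_p, axis=1, keepdims=True))
        # M-step: update weights, means and variances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        vars_ = (resp.T @ (X ** 2)) / nk[:, None] - means ** 2 + 1e-6
    return weights, means, vars_

def avg_log_likelihood(X, model):
    """Average per-frame log-likelihood of X under a diagonal GMM."""
    weights, means, vars_ = model
    log_p = (-0.5 * (((X[:, None, :] - means) ** 2) / vars_
                     + np.log(2 * np.pi * vars_)).sum(axis=2)
             + np.log(weights))
    return np.logaddexp.reduce(log_p, axis=1).mean()

# Toy "speakers": random 4-dimensional frames standing in for MFCC features
rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 1.0, (500, 4))
speaker_b = rng.normal(3.0, 1.0, (500, 4))
model_a = fit_diag_gmm(speaker_a)
test = rng.normal(0.0, 1.0, (100, 4))   # an utterance from speaker A
print(avg_log_likelihood(test, model_a), avg_log_likelihood(speaker_b, model_a))
```

    In a real system the score is compared against a threshold (often after normalization by a background model) to accept or reject the claimed identity.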

  • Predicting Venues in Location-Based Social Networks
    Omar F. Almallah1 and Songül Albayrak2, 1,2Department of Computer Engineering, Yildiz Technical University, Istanbul, Turkey

    ABSTRACT

    The spread of social networks and the evolution of mobile phone devices have led to heavy usage of location-based social network applications such as Foursquare, Twitter, Swarm, and Zomato. These applications produce huge datasets containing a blend of information about user behaviour, each user's social network, and the venues themselves, all of which is available to mobile location recommendation systems. These datasets differ considerably from those used in online recommender systems: they contain more information and detail about users and venues, which allows clearer results and much higher analysis accuracy. In this paper we examine user behaviour and venue popularity through a large check-in dataset from a location-based social service, Foursquare, containing both user check-ins and location information. Our analysis spans three different cities and reveals different mobility habits, place preferences, and location patterns in user personalities. This information about user behaviour and venue popularity can be used to inform recommendation systems and to predict a user's next move, based on the categories the user tends to visit and on each user's check-in history.
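
    The category-based prediction idea in the last sentence can be reduced to a frequency baseline. This is a hypothetical sketch, not the paper's model: it simply predicts the category the user has checked into most often, whereas the paper's analysis also draws on social ties and city-level patterns.

```python
from collections import Counter

def predict_next_category(checkins):
    """Predict a user's next venue category as the most frequent one in
    their check-in history (a simple frequency baseline)."""
    return Counter(checkins).most_common(1)[0][0]

# Hypothetical check-in history for one user
history = ["cafe", "office", "cafe", "gym", "cafe", "restaurant"]
print(predict_next_category(history))   # "cafe"
```

    Stronger predictors would condition on time of day, the previous venue, and friends' check-ins, which is the kind of signal the dataset analysis above is meant to surface.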