Invited Talks

Blazewicz Jacek
Poznan University of Technology, Institute of Computing Science, Poland

Topic: GPU and Cloud Based Genome Sequencing Algorithm

Abstract :

Sequencing has recently become a primary method used by life scientists to investigate biologically relevant problems related to genomics. As modern sequencers can only read very short fragments of DNA strands, an algorithm is needed to assemble them into the original sequence. We propose a new algorithm based on the classical overlap graph approach, but able to accurately handle large data sets coming from next generation sequencing machines. We start with a unique way to construct the DNA overlap graph model by employing the power of alignment-free sequence comparison. The novelty of our solution lies in a special sorting technique that puts similar sequences close to each other without performing sequence alignment. This phase is very fast and serves as a preselection of similar pairs of sequences. Two highly parallelizable steps then follow. First, an ultra-fast exact sequence comparison verifies the previously selected candidates, yielding very accurate results. High performance computations employing both multiple CPUs and GPUs make the method very efficient even for large data sets. Second, having the DNA graph, the algorithm traverses it in a parallel way to obtain so-called contigs and scaffolds, i.e., long fragments of the reconstructed genome. Again, the approach is novel, as the resulting contigs are precisely cut in places where repetitive fragments are detected.
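
A toy sketch of the sorting-based preselection idea may help fix intuition (illustrative Python only, not the authors' implementation; the reads, the window size, and the lexicographic sort key are made-up assumptions, and the real algorithm uses a more elaborate key): sorting places reads that share a long common prefix next to each other, so only neighbouring pairs have to be checked by the exact suffix-prefix comparison of the verification step.

    # Hypothetical sketch: sort reads, keep only pairs of close neighbours
    # as overlap candidates, then verify each candidate exactly.
    def preselect_candidates(reads, window=3):
        """Return candidate pairs of read indices, without any alignment."""
        order = sorted(range(len(reads)), key=lambda i: reads[i])
        candidates = set()
        for pos, i in enumerate(order):
            for j in order[pos + 1 : pos + 1 + window]:
                candidates.add((min(i, j), max(i, j)))
        return candidates

    def verify_overlap(a, b, min_overlap=4):
        """Exact check: longest suffix of `a` equal to a prefix of `b`."""
        for k in range(min(len(a), len(b)), min_overlap - 1, -1):
            if a[-k:] == b[:k]:
                return k
        return 0

    reads = ["ACGTACGT", "ACGTTTGA", "TTGACCGT", "CCGTAAAC"]
    for i, j in sorted(preselect_candidates(reads)):
        k = max(verify_overlap(reads[i], reads[j]),
                verify_overlap(reads[j], reads[i]))
        if k:
            print(f"reads {i} and {j} overlap by {k} bases")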


Bouvry Pascal
Data and Knowledge Engineering Lab, School of Information Technology (SIT)
King Mongkut’s University of Technology Thonburi (KMUTT), Bangkok, Thailand

Topic: Cloud Service Providers' Ranking Method

Abstract :

Control Chart Patterns (CCPs) can be considered as time series. They are used in monitoring the control process, so the ability to recognize these patterns is essential in manufacturing, as abnormalities can then be detected at an early stage. Feeding CCPs directly to classifiers has proven unsatisfactory, especially in the presence of noise. Therefore, different kinds of preprocessing have been applied to CCPs to aid classification. This research has two main objectives: first, to study how the lengths of CCPs affect classification performance; second, to determine the most suitable preprocessing techniques for CCPs. Three preprocessing techniques are selected: the Kalman filter, statistical features, and the symbolic representation known as Symbolic Aggregate ApproXimation (SAX). The Minimum Description Length (MDL) algorithm for selecting SAX parameters is also investigated. Neural networks are chosen as the tool for implementing the classifiers. The study concludes that longer patterns are preferable to shorter ones, and that statistical features are the best preprocessing technique.
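
As a hedged illustration of the statistical-features preprocessing (a minimal sketch, not the paper's exact feature set; the chosen features and the sample pattern are assumptions), a raw CCP can be reduced to a short feature vector that is far more robust for a neural-network classifier than the raw points:

    import statistics

    def ccp_features(series):
        """Reduce one control chart pattern to simple statistical features."""
        n = len(series)
        mean = statistics.fmean(series)
        sd = statistics.pstdev(series)
        # Least-squares slope separates increasing/decreasing trend patterns.
        x_mean = (n - 1) / 2
        slope = sum((t - x_mean) * (y - mean) for t, y in enumerate(series)) \
                / sum((t - x_mean) ** 2 for t in range(n))
        # Mean absolute first difference helps flag cyclic patterns.
        mean_abs_diff = statistics.fmean(
            abs(b - a) for a, b in zip(series, series[1:]))
        return [mean, sd, slope, mean_abs_diff]

    uptrend = [0.1 * t for t in range(20)]
    print(ccp_features(uptrend))  # the positive slope reveals an upward trend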


Grégoire Danoy
Parallel Computing and Optimisation Group, University of Luxembourg, Luxembourg

Topic: A Coevolutionary Approach for the Cloud Brokering Optimisation Problem

Abstract :

In recent years, cloud computing has seen exponential growth. Finding the right set of cloud services among the numerous Cloud Service Providers (CSPs) that will best fulfill one customer's needs has thus become an arduous task.

Cloud service brokers (CSBs) propose to assist customers in selecting the best services according to some criteria, e.g. cost and Quality of Service (QoS). The corresponding Cloud Brokering Optimisation (CBO) problem has given rise to novel models, including a variant of the Internet Shopping Optimization Problem (ISOP). As for the original problem, the NP-hardness of the CBO problem motivates the development of novel efficient optimisation methods.

This work proposes to tackle the CBO problem with a Coevolutionary Genetic Algorithm (CGA). Contrary to standard Evolutionary Algorithms (EAs), which evolve one population of homogeneous individuals representing the global solution, coevolution considers several subpopulations representing subparts of the global solution that either compete or cooperate. Such a decomposition is proposed for the CBO problem, and the performance of the corresponding CGA is evaluated on a set of CBO benchmark instances.
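
A minimal cooperative-coevolution sketch may clarify the decomposition (illustrative Python only; the placeholder objective, the split into two subparts, and all parameters are assumptions, not the CBO model): each subpopulation evolves one subpart of the global solution and scores its individuals by joining them with the best representative of the other subpopulation.

    import random

    def fitness(solution):
        # Placeholder objective: smaller component sums are better.
        return -sum(solution)

    def evolve(pop, partner_best, steps=50):
        """One coevolution round: individuals are scored when joined with
        the partner subpopulation's best representative."""
        score = lambda ind: fitness(ind + partner_best)
        for _ in range(steps):
            parent = max(pop, key=score)
            child = [g + random.gauss(0, 0.1) for g in parent]  # mutation
            worst = min(pop, key=score)
            pop[pop.index(worst)] = child
        return max(pop, key=score)

    random.seed(0)
    pop_a = [[random.random() for _ in range(3)] for _ in range(10)]
    pop_b = [[random.random() for _ in range(3)] for _ in range(10)]
    best_a, best_b = pop_a[0], pop_b[0]
    for _ in range(5):  # alternate between the two subpopulations
        best_a = evolve(pop_a, best_b)
        best_b = evolve(pop_b, best_a)
    print("joined fitness:", fitness(best_a + best_b))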

Biography

Grégoire Danoy received his Industrial Engineer degree in Computer Science from the Luxembourg University of Applied Sciences (IST) in 2003. He obtained his Master in Web Intelligence in 2004 and his PhD in Computer Science in 2008 from the Ecole des Mines de Saint-Etienne, France. Since 2008, Dr. Danoy has been a Research Scientist in the Computer Science and Communications research unit (CSC) of the University of Luxembourg. His current research interests include nature-inspired algorithms and multi-agent systems for tackling problems in telecommunications, mobile networks, bioinformatics, high performance computing, and cloud computing. Dr. Danoy has published more than 50 research articles in the field and co-authored one book on evolutionary algorithms for mobile ad hoc networks (Wiley).


Emmanuel Kieffer
University of Luxembourg

Topic: On Bi-level Programming for the Cloud Brokering Optimisation

Abstract :

A Cloud Service Broker (CSB) is defined by the International Organization for Standardization as a "cloud service partner between Cloud Service Customers (CSCs) and Cloud Service Providers (CSPs)". It selects the best services in terms of cost and quality to provide CSCs with the best cloud computing solutions. Given the large number of providers and services, this task becomes hard and calls for new optimization solutions.

Some new models have already been proposed through the Internet Shopping Optimization Problem (ISOP), which may be easily adapted to cloud brokering problems. Nevertheless, it is worth mentioning that those models only consider a single level of decisions and suppose that CSBs and CSCs decide as a single entity. Bi-objective optimization may be used in such cases, but the game-theoretic aspect would be lost. This is why bi-level programming is recommended. Indeed, bi-level situations involve two decision makers, each controlling its own set of variables. The first decision maker, referred to as the leader, takes a decision which restricts the decision of the second decision maker, referred to as the follower. In response, the follower tries to react optimally to the leader's decision. This modelling pattern may lead to collaboration or competition between them. Furthermore, a bi-level strategy is more realistic, since it does not overestimate the objective fitness when several decision makers may have an impact on each other. In the context of Cloud Brokering Optimization, the CSB wants to guide the customers' decisions to maximize its profit, while CSCs try to lower their costs. This setting can be reproduced with different objective functions (e.g. security aspects, green computing indicators).
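
The following toy sketch (illustrative Python; the services, prices, quality scores, and the brute-force grid are all made-up assumptions) shows the nested structure of such a bi-level problem: the leader's profit can only be computed after the follower's optimal reaction is known.

    import itertools

    base_costs = {"s1": 10.0, "s2": 12.0, "s3": 9.0}   # CSP prices (made up)
    quality    = {"s1": 0.9,  "s2": 0.95, "s3": 0.7}   # QoS scores (made up)
    MIN_QUALITY = 0.8
    services = list(base_costs)

    def follower_choice(margins):
        """Lower level: the customer minimises total price over the
        services meeting its quality requirement."""
        acceptable = [s for s in services if quality[s] >= MIN_QUALITY]
        return min(acceptable, key=lambda s: base_costs[s] + margins[s])

    def leader_profit(margins):
        """Upper level: the broker earns its margin on whichever service
        the customer actually picks in reaction."""
        return margins[follower_choice(margins)]

    # Brute-force the leader's decision over a coarse grid of margins.
    grid = (0.5, 1.0, 2.0, 4.0)
    best = max((dict(zip(services, ms))
                for ms in itertools.product(grid, repeat=len(services))),
               key=leader_profit)
    print("margins:", best, "-> customer buys", follower_choice(best),
          "broker earns", leader_profit(best))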


Kliazovich D.
University of Toronto, 27 King's College Circle, Toronto, M5S 1A1, Canada

Topic: Performance and Energy Efficiency of Cloud Computing Communication Systems

Abstract :

In this paper we examine the reliability of subjective rating judgments along a single dimension, focusing on estimates of technical quality produced by integrity impairments and failures (non-accessibility and non-retainability) associated with viewing video. There is often considerable variability, both within and between individuals, in subjective rating tasks. In the research reported here we consider different approaches to screening out unreliable participants. We review available alternatives, including a method developed by the ITU, a method based on screening outliers, a method based on the strength of correlations with an assumed "natural" ordering of impairments, and a clustering technique that makes no assumptions about the data. We report on an experiment that assesses the subjective quality of experience associated with impairments and failures of online video. We then assess the reliability of the results using a correlation method and a clustering method, both of which give similar results. Since the clustering method utilized here makes fewer assumptions about the data, it may be a useful supplement to existing techniques for assessing the reliability of participants making subjective evaluations of the technical quality of videos.
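
A minimal sketch of the correlation-based screening (illustrative Python, not the paper's procedure; the ratings, the severity ordering, and the threshold are assumptions): a participant whose ratings correlate weakly with the assumed "natural" ordering of impairment severity is flagged as unreliable.

    from statistics import correlation  # available since Python 3.10

    natural_order = [1, 2, 3, 4, 5]   # assumed severity of the impairments

    participants = {
        "p1": [1.0, 2.1, 2.9, 4.2, 5.0],  # consistent with the ordering
        "p2": [1.2, 1.8, 3.5, 3.9, 4.8],
        "p3": [4.0, 1.0, 5.0, 2.0, 3.0],  # essentially random
    }

    THRESHOLD = 0.7
    for name, ratings in participants.items():
        r = correlation(natural_order, ratings)
        print(f"{name}: r = {r:+.2f} ->",
              "keep" if r >= THRESHOLD else "screen out")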


Malgorzata Sterna
Poznan University of Technology, Institute of Computing Science, Poland

Topic: Late Work On-Line Scheduling for Order Shipping in Internet Shopping Optimisation Problem

Abstract :

We study the scheduling problem on parallel identical machines with a common due date and the total late work criterion. The late work criterion estimates the quality of a solution based on the duration of late parts of jobs. Jobs appearing in the system have to be scheduled on machines, preferably before the given due date, in order to minimize their late parts.

In the offline case all jobs are known in advance, while in the online case they arrive in the system one by one. Late work scheduling finds many practical applications. Among others, the late work criterion can be used to optimize the process of shipping orders in the Internet Shopping Optimization Problem. To ship an order, a worker (modeled by a machine) has to collect and pack all ordered items. This task is represented by a job. Usually orders are shipped in batches (e.g., loaded on the same vehicle), for which shipping dates are defined. The shipping date is represented by a common due date, defined for a given group of orders (i.e., a set of jobs).

The goal is to minimize the size of the orders which are not ready for shipping at the required time. The late work criterion has not been studied in the online mode so far. Thus, the analysis of the online problem was preceded by an analysis of the offline problem, whose complexity status had not been formally stated in the literature. We proved the binary NP-hardness of the offline case with two identical machines by showing a transformation from the partition problem, and we proposed a pseudopolynomial-time dynamic programming algorithm. Then, we proposed an online algorithm for an arbitrary number of machines and determined its competitive ratio (i.e., an upper bound on the distance between any online solution and the optimal offline solution). Moreover, we proved the optimality of this algorithm for the two-machine case by showing that its competitive ratio equals the lower bound on the competitive ratio of any online algorithm.
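
A minimal sketch of the setting may help (illustrative Python with a simple greedy assignment rule standing in for the authors' online algorithm; the job sizes, m, and d are made up): each arriving job is placed on the currently least-loaded machine, and the late work of a job is the part of it processed after the common due date d.

    def online_schedule(processing_times, m, d):
        """Greedy online assignment; returns the total late work, i.e.
        the summed parts of jobs executed after the common due date d."""
        loads = [0.0] * m
        total_late_work = 0.0
        for p in processing_times:       # jobs are revealed one at a time
            k = loads.index(min(loads))  # place on least-loaded machine
            end = loads[k] + p
            # The late part of a job is at most its whole length p.
            total_late_work += min(p, max(0.0, end - d))
            loads[k] = end
        return total_late_work

    print(online_schedule([3, 5, 2, 6, 4], m=2, d=7))  # -> 6.0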


Jakub Marszalkowski
Poznan University of Technology, Institute of Computing Science, Poland

Topic: Internet Shopping Optimization Problem with Budget Constraints

Abstract :

The Internet Shopping Optimization Problem (ISOP) is a problem in which a customer wants to buy a set of products online, choosing them from many available shops and paying not only the prices of the items but also the necessary delivery costs. A new version of this problem will be introduced in which, within a budget constraint, the customer wants to receive a maximal number of items or a maximal combined perceived value of the items. This way, an incomplete order realization is allowed. The problem resembles, or even generalizes, other well-known optimization problems such as the Multiple Knapsack Problem (MKP) and the Maximum Coverage Problem (MCP). A mathematical formulation of the problem will be presented and its computational complexity analyzed. Finally, an efficient algorithm for solving the problem will be described.
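
As an illustration of the budget-constrained setting, here is a small greedy heuristic in Python (a sketch only, not the efficient algorithm announced in the talk; the shops, prices, delivery fees, and budget are made up): it repeatedly buys whichever remaining item is cheapest once delivery is accounted for, charging a shop's delivery fee only the first time that shop is used.

    prices = {                    # prices[item][shop]
        "book":  {"A": 8,  "B": 9},
        "mouse": {"A": 12, "B": 10},
        "cable": {"B": 4},
    }
    delivery = {"A": 5, "B": 6}   # one-off delivery cost per shop
    budget = 30

    bought, used_shops, spent = [], set(), 0
    remaining = set(prices)

    def effective(item):
        """Cheapest (cost, shop) for an item, adding the delivery fee
        only if the shop has not been used yet."""
        return min((p + (0 if s in used_shops else delivery[s]), s)
                   for s, p in prices[item].items())

    while remaining:
        cost, item = min((effective(i)[0], i) for i in remaining)
        if spent + cost > budget:
            break                 # budget exhausted: order stays incomplete
        shop = effective(item)[1]
        bought.append((item, shop))
        used_shops.add(shop)
        spent += cost
        remaining.remove(item)

    print(bought, "- total spent:", spent)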


Dr. Mathura Prasad Thapliyal
Department of Computer Science, School of Engineering & Technology, HNB Garhwal University, Srinagar (Garhwal) Uttarakhand

Topic: Digital India and Smart Cities : Smarter solution for Better Tomorrow
 
Abstract :

Digital technologies, which include cloud computing and mobile applications, have emerged as catalysts for rapid economic growth and citizen empowerment across the globe. Digital India is a programme to prepare India for a knowledge future. The existing/ongoing e-Governance initiatives would be revamped to align them with the principles of Digital India. The vision of Digital India is to transform the country into a digitally empowered society and knowledge economy. It would ensure that government services are available to citizens electronically, and it would also bring in public accountability through the mandated electronic delivery of government services. The present talk will focus on three key areas: Digital Infrastructure as a Utility to Every Citizen, Governance & Services on Demand, and Digital Empowerment of Citizens. The second part of the talk will discuss Smart Cities. The Smart Cities Mission is an innovative new initiative of the Government of India, which has announced its vision to set up 100 smart cities across the country. Since then, a race has been on among cities to land on the list that the Ministry of Urban Development is compiling. The 100 Smart Cities Mission intends to promote the adoption of smart solutions for the efficient use of available assets, resources and infrastructure.


Biography

Mathura Prasad Thapliyal is an Indian science educator. His areas of specialization are human-computer interaction, software engineering, data mining, MIS, and e-learning. He has 19 years of teaching experience. He has also been an editorial board member, reviewer, and advisory/programme committee member for several conferences and journals, for example:

  1. Electronic Journal of E-Learning (EJEL)
  2. Electronic Journal of E-Governance (EJEG)
  3. International Journal of Computer Science and Information Security (IJCSIS)
  4. IASTED International Conference on Human-Computer Interaction (HCI), March 17-19, 2008, Innsbruck, Austria
  5. HCI International, Beijing, China, 22-27 July 2007
  6. Eleventh IFIP TC-13 International Conference, INTERACT 2007
  7. International Conference on Emerging Technologies and Applications in Engineering, Technology & Sciences (ICETAETS-2008), organized by the Department of Computer Science, Saurashtra University, Rajkot

Furthermore, he has been involved as a committee member in professional associations such as the Computer Society of India (CSI) and the Indian Science Congress, and is a member of the Society for Information Science and of the Indian Society of Information Theory and Applications (ISITA). At the international level, he was listed in the 2005-2006 edition of the International Who's Who of Professionals, published in Washington, D.C., USA, and was nominated as International Educator of the Year 2005 by the International Biographical Centre, Cambridge, England.


Musial J.
Data and Knowledge Engineering Lab, School of Information Technology (SIT)
King Mongkut’s University of Technology Thonburi (KMUTT)

Topic: Internet Shopping Optimization: Current Practices and Upcoming Challenges

Abstract :

Time Series Classification is an area of data mining that has received much attention recently. Control Chart Patterns (CCPs) can be considered as time series. Monitoring and recognition of CCPs is an important process in manufacturing, which implies that the ability to classify CCPs with high accuracy is essential. This study attempts to implement CCP classifiers capable of dealing with CCPs at different levels of noise. Extracting image-processing statistical features is adopted as the preprocessing technique. The work also investigates the effect of the level of noise on classification. Three different techniques for implementing classifiers are selected: Decision Trees, Neural Networks, and an evolutionary-based program known as the Self-adjusting Association Rules Generator (SARG). It was found that SARG yielded the best performance among them. To date, this study is an attempt to classify a particular model of CCPs with the highest level of noise.
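
A self-contained sketch of such an experimental setup follows (illustrative Python; the pattern generators, the two features, and the nearest-centroid classifier are assumptions standing in for the Decision Tree, Neural Network, and SARG classifiers compared in the study): synthetic CCPs are generated at a chosen noise level and classified from simple statistical features.

    import random

    def make_ccp(kind, n=30, noise=0.5):
        """One synthetic control chart pattern with Gaussian noise."""
        base = {"normal":  lambda t: 0.0,
                "uptrend": lambda t: 0.1 * t,
                "cyclic":  lambda t: 2.0 * (-1) ** (t // 5)}[kind]
        return [base(t) + random.gauss(0, noise) for t in range(n)]

    def features(series):
        """Least-squares slope and mean absolute first difference."""
        n, mean = len(series), sum(series) / len(series)
        xm = (n - 1) / 2
        slope = sum((t - xm) * (y - mean) for t, y in enumerate(series)) \
                / sum((t - xm) ** 2 for t in range(n))
        mad = sum(abs(b - a) for a, b in zip(series, series[1:])) / (n - 1)
        return (slope, mad)

    random.seed(1)
    kinds = ["normal", "uptrend", "cyclic"]
    centroids = {k: tuple(sum(v) / 50 for v in
                          zip(*(features(make_ccp(k)) for _ in range(50))))
                 for k in kinds}

    correct = 0
    for true in kinds:                    # accuracy on fresh patterns
        for _ in range(20):
            f = features(make_ccp(true))
            pred = min(kinds, key=lambda k: sum(
                (a - b) ** 2 for a, b in zip(f, centroids[k])))
            correct += pred == true
    print(f"accuracy at noise level 0.5: {correct / 60:.2f}")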


Varrette S.

Topic: Amazon Elastic Compute Cloud (EC2) vs. In-House HPC Platform: a Cost Analysis

Abstract :

Since its advent in the mid-2000s, the Cloud Computing (CC) paradigm has been increasingly advertised as THE solution to most IT problems. While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, several voices (most probably commercial ones) express the wish that CC platforms could also serve HPC needs and eventually replace in-house HPC platforms. If we exclude the pure performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when submitted to an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that the instances offered by Cloud providers are most probably competitive from a cost point of view.

In this article, we confront this intuition by evaluating the Total Cost of Ownership (TCO) of the in-house HPC facility we have operated since 2007 at the University of Luxembourg (UL), and compare it with the investment that would have been required to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is two-fold. First, we propose a theoretical price-performance model based on a study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on our own cluster TCO and taking into account all Operating Expenses (OPEX), we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. The results obtained advocate in general for the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing platforms, even when they are provided by the worldwide reference Cloud provider.
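
The structure of such an hourly price comparison can be illustrated in a few lines of Python (every figure below is a made-up placeholder, not the paper's measured TCO or actual EC2 pricing): the CAPEX is amortised over the cluster's lifetime, the OPEX is added, and the resulting cost per node-hour is set against an assumed on-demand instance price.

    capex = 1_200_000.0        # hardware acquisition cost, assumed (EUR)
    opex_per_year = 250_000.0  # power, cooling, staff, ..., assumed (EUR/year)
    lifetime_years = 5
    nodes = 150
    utilisation = 0.8          # fraction of node-hours actually consumed

    node_hours = lifetime_years * 365 * 24 * nodes * utilisation
    inhouse = (capex + opex_per_year * lifetime_years) / node_hours

    ec2_on_demand = 0.50       # assumed hourly price of a comparable instance
    print(f"in-house cluster: {inhouse:.3f} per node-hour")
    print(f"EC2 on-demand (assumed): {ec2_on_demand:.3f} per hour")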