Parallelism has been employed for many years in high performance computing. Parallel computers can be classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, Massively Parallel Processors (MPPs), and grids use multiple computers to work on the same task. Scheduling and mapping heterogeneous tasks onto heterogeneous processors dynamically in a distributed environment has been one of the challenging areas of research in the field of grid computing. Several general-purpose approaches with modified techniques have been developed. This paper presents a comparative study of algorithms such as Directed Search Optimization (DSO) trained Artificial Neural Networks (ANN), Parallel Orthogonal Particle Swarm Optimization (POPSO), Lazy Ant Colony Optimization (LACO), and Genetic Algorithms (GA), on the basis of workflow scheduling in a multiprocessor grid environment. It also presents various heuristic-based methods used in task scheduling.
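As an illustration of the evolutionary family of schedulers surveyed above, the following is a minimal sketch (not any of the compared algorithms themselves) of how a genetic algorithm can encode a task-to-processor mapping and evolve it to minimize makespan. The task costs, processor count, and GA parameters are assumed for the example, and task dependencies are omitted for brevity.

```python
import random

# Hypothetical data: execution times of 8 independent tasks on 3 heterogeneous
# processors; real workflow scheduling would also respect task dependencies.
TASK_COST = [
    [4, 6, 9], [2, 3, 5], [7, 8, 12], [3, 4, 6],
    [5, 7, 10], [1, 2, 3], [6, 9, 11], [2, 2, 4],
]
NUM_TASKS, NUM_PROCS = len(TASK_COST), 3

def makespan(chromosome):
    """A chromosome maps each task index to a processor; the fitness is the
    finish time of the most heavily loaded processor."""
    loads = [0.0] * NUM_PROCS
    for task, proc in enumerate(chromosome):
        loads[proc] += TASK_COST[task][proc]
    return max(loads)

def genetic_schedule(pop_size=30, generations=100, mutation_rate=0.1):
    population = [[random.randrange(NUM_PROCS) for _ in range(NUM_TASKS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=makespan)             # keep the fittest half (elitism)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, NUM_TASKS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:   # mutate by random reassignment
                child[random.randrange(NUM_TASKS)] = random.randrange(NUM_PROCS)
            children.append(child)
        population = parents + children
    best = min(population, key=makespan)
    return best, makespan(best)

if __name__ == "__main__":
    mapping, span = genetic_schedule()
    print("best mapping:", mapping, "makespan:", span)
```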
In today's technological world, leakage of confidential data within an organization is considered a severe crime and is one of the major security threats organizations face. Sharing of confidential data should be secured so that it cannot be accessed by others, and the agents to whom the confidential data has been provided should be tracked. This paper proposes a system to identify the guilty party who has leaked confidential data distributed by the owner of the data within the organization. To distribute the data, a data allocation strategy is proposed that monitors with whom the data has been shared. To secure the data object and to keep track of the data lineage of the distributed data, a watermarking technique is proposed. An encrypted secret key, unique for every agent, is used to watermark the confidential data.
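To make the per-agent watermarking idea concrete, here is a hedged sketch assuming one secret key per agent and an HMAC-based mark embedded in each agent's copy; the paper's actual watermarking and encryption scheme may differ, and the `Distributor` class, field names, and record format are illustrative only.

```python
import hmac, hashlib, secrets

class Distributor:
    """Illustrative distributor that marks each agent's copy of a record
    with a tag derived from that agent's secret key."""

    def __init__(self):
        self.agent_keys = {}                     # agent_id -> secret key

    def register_agent(self, agent_id):
        self.agent_keys[agent_id] = secrets.token_bytes(32)

    def watermark(self, agent_id, record):
        key = self.agent_keys[agent_id]
        tag = hmac.new(key, record.encode(), hashlib.sha256).hexdigest()[:16]
        return f"{record}|wm:{tag}"              # embed agent-specific mark

    def trace_leak(self, leaked_record):
        """Identify which agent's key produced the embedded watermark."""
        record, _, tag = leaked_record.rpartition("|wm:")
        for agent_id, key in self.agent_keys.items():
            expected = hmac.new(key, record.encode(), hashlib.sha256).hexdigest()[:16]
            if hmac.compare_digest(expected, tag):
                return agent_id
        return None

if __name__ == "__main__":
    d = Distributor()
    d.register_agent("agent-A")
    d.register_agent("agent-B")
    copy_for_a = d.watermark("agent-A", "customer:42,balance:1000")
    print("leak traced to:", d.trace_leak(copy_for_a))   # -> agent-A
```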
In recent years, there has been considerable interest in Mobile Ad hoc Networks (MANETs). A mobile ad hoc network is a group of mobile nodes that send and receive messages and communicate with each other. The network thus builds its own structure and does not depend on a central infrastructure or administration. MANETs have been used in environments with poor communication, where infrastructure cannot be built, such as disaster areas and war zones. In addition to these key characteristics, a MANET node carries various attribute information, such as mobility, velocity, and energy. However, MANETs have constraints, such as limited transmission bandwidth and energy consumption, which cause disconnections between nodes and rerouting. Most research in MANETs focuses on clustering-based routing protocols, which can significantly enhance the performance, scalability, and energy efficiency of routing for mobile ad hoc networks. In this paper, a performance analysis of cluster-based energy-efficient routing protocols, namely Low Energy Adaptive Clustering Hierarchy (LEACH) and Hybrid Energy-Efficient Distributed (HEED), has been carried out for MANETs through network simulation. The HEED protocol provided better performance than LEACH and also increased throughput with the number of nodes, transmission range, and mobility.
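For reference, the sketch below shows the standard LEACH cluster-head election rule, in which a node that has not recently served as head elects itself with threshold probability T(n) = P / (1 - P * (r mod 1/P)). The node count and desired head fraction P are assumed values; HEED additionally weighs residual energy and intra-cluster communication cost, which this sketch omits.

```python
import random

P = 0.1                      # desired fraction of cluster heads per round (assumed)
NUM_NODES = 20               # assumed network size

def leach_threshold(round_number, was_recent_head):
    """LEACH threshold: zero for nodes that already served as head in the
    current cycle, otherwise P / (1 - P * (r mod 1/P))."""
    if was_recent_head:
        return 0.0
    return P / (1 - P * (round_number % int(1 / P)))

def elect_cluster_heads(round_number, recent_heads):
    heads = []
    for node in range(NUM_NODES):
        t = leach_threshold(round_number, node in recent_heads)
        if random.random() < t:
            heads.append(node)
    return heads

if __name__ == "__main__":
    recent = set()
    for r in range(3):
        heads = elect_cluster_heads(r, recent)
        recent.update(heads)
        print(f"round {r}: cluster heads {heads}")
```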
Web-based application development using .NET and Quantum Geographic Information System (QGIS) is presented in this research paper. QGIS is free, widely used open-source software for processing geographical data, and web-based GIS applications are online services over the internet that provide maps to users and help them search and browse spatial information, such as locating places and routes. The web-based tourism GIS is developed using a mix of programming and database technologies. Tourism has become a popular source of income, both nationally and internationally. The tourism Web GIS application provides basic information using GIS to facilitate, plan, and promote tourism.
This is an era of High Performance Computing (HPC), in which many computing elements work together in parallel to enhance computing power. Scheduling plays an important role in enhancing computing power in a multiprocessor environment. Scheduling tasks onto the machines of a multiprocessor environment is, in general, an NP-complete (nondeterministic polynomial time) problem, and heuristics are an effective way to solve NP-complete problems. List scheduling is one of the most widely used heuristics in a homogeneous multiprocessor environment. However, achieving parallelism in a multiprocessor environment is not easy; it is influenced by factors such as task dependencies and the system environment. In this paper, some list scheduling algorithms for dependent task sets are studied and their performance is evaluated on the basis of makespan. The EST (Earliest Processing Time) scheduling algorithm performs better and yields a smaller makespan than the other scheduling algorithms, namely LPT (Longest Processing Time), SPT (Shortest Processing Time), and ECT (Earliest Completion Time). The graphs show that EST outperforms the other algorithms as the number of processes increases.
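The following is a minimal sketch of the list scheduling idea on identical processors: tasks are taken in a priority order and each is assigned to the processor that becomes free earliest, after which the makespan is compared across orderings (LPT versus SPT here). The task times and processor count are assumed, and the precedence constraints handled by the paper's dependent-task algorithms are omitted.

```python
import heapq

TASK_TIMES = [2, 14, 4, 16, 6, 5, 3]   # assumed task durations
NUM_PROCS = 3                          # assumed processor count

def list_schedule(task_times, num_procs, order):
    """Assign tasks in 'order' to the earliest-available processor and
    return the resulting makespan."""
    ready = [0.0] * num_procs          # min-heap of processor free times
    heapq.heapify(ready)
    for t in order:
        start = heapq.heappop(ready)
        heapq.heappush(ready, start + task_times[t])
    return max(ready)

if __name__ == "__main__":
    tasks = list(range(len(TASK_TIMES)))
    lpt = sorted(tasks, key=lambda t: -TASK_TIMES[t])  # Longest Processing Time first
    spt = sorted(tasks, key=lambda t: TASK_TIMES[t])   # Shortest Processing Time first
    print("LPT makespan:", list_schedule(TASK_TIMES, NUM_PROCS, lpt))
    print("SPT makespan:", list_schedule(TASK_TIMES, NUM_PROCS, spt))
```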
With advances in technology, parallel computing has become a prominent area, with the aspiration of providing adequate and faster results for various applications. Parallel computing focuses on performing operations in parallel and is growing at an accelerating rate. It emphasizes parallel and concurrent computation among different processors: a problem is subdivided, and the sub-problems are executed simultaneously. Load imbalance is a well-known problem wherever parallelism is involved, and one of the challenging tasks of parallel computing is to balance the load among processors. Load balancing optimizes the way the processing load is shared among multiple processors. Load balancing algorithms are broadly categorized as static and dynamic. Static load balancing distributes processes to processors at compile time, while dynamic load balancing binds processes to processors at run time. Static load balancing algorithms may be either deterministic or probabilistic, and dynamic load balancing is further categorized as centralized and distributed. This paper presents a study and analysis of load balancing approaches in multiprocessor systems. The performance of sender-initiated load balancing is observed on the basis of execution time, speedup, and steal ratio. To implement the load balancing algorithms, the scale simulator is used.
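As a hedged illustration of the sender-initiated policy studied above (not the paper's simulator code), the sketch below has an overloaded processor poll a few randomly chosen peers and transfer a task to the first underloaded one it finds. The queue contents, threshold, and poll limit are assumed values.

```python
import random

THRESHOLD = 4    # assumed queue length above which a processor is "overloaded"
POLL_LIMIT = 3   # assumed number of peers a sender probes per balancing step

def sender_initiated_balance(queues):
    """queues: list of per-processor task queues (lists of task ids).
    Returns the list of (task, sender, receiver) transfers performed."""
    transfers = []
    for sender, q in enumerate(queues):
        if len(q) <= THRESHOLD:
            continue                                  # sender is not overloaded
        candidates = [p for p in range(len(queues)) if p != sender]
        for receiver in random.sample(candidates, min(POLL_LIMIT, len(candidates))):
            if len(queues[receiver]) < THRESHOLD:     # found an underloaded peer
                task = queues[sender].pop()
                queues[receiver].append(task)
                transfers.append((task, sender, receiver))
                break
    return transfers

if __name__ == "__main__":
    queues = [[f"t{i}{j}" for j in range(n)] for i, n in enumerate([7, 1, 3, 0])]
    moved = sender_initiated_balance(queues)
    print("transfers:", moved)
    print("queue lengths after balancing:", [len(q) for q in queues])
```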
Cloud computing is a new generation of utility computing. It provides the ability to use computing as a utility, anywhere and at any time. It is highly elastic and can grow or shrink according to user demand. The elasticity of computing power in the cloud is based on the migration of virtual machines from overutilized servers to underutilized servers and vice versa. Virtual machine migration (VMM) is used to reduce the power consumption of the cloud environment, which leads to green computing. In virtual machine migration, virtual machines are migrated from one physical server to another, which may lead to security threats such as replay attacks, 'Time-of-Check' to 'Time-of-Use' (TOCTTOU) attacks, and resumption ordering. Several experiments have been conducted using the KVM/QEMU (Kernel-based Virtual Machine/Quick Emulator) hypervisor. It is found that tampering of data by a Man-In-The-Middle (MITM) is possible in the information gathering phase and that a TOCTTOU attack can be injected. This poses a serious security threat and can create a hotspot at the destination host, which can degrade the overall cloud experience. A hotspot is a situation in which the physical host is not able to fulfil the requested resource requirements. In this paper, a two-level security framework is proposed for protecting the VMM process from tampering of data and the TOCTTOU problem. Further, the results of the proposed technique have been compared, in terms of time, with the existing RSA (Rivest-Shamir-Adleman) encryption and decryption technique that can be used to protect against tampering of data in the information gathering phase. The results indicate that the proposed technique reduces the time from 12.2 to 10.3 seconds (for a network of 28 physical hosts) for protecting the data in the information gathering phase of the virtual machine migration process.
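To illustrate the kind of check-time/use-time protection at stake (this is not the paper's two-level framework), the sketch below has the source host authenticate the information gathered before migration with an HMAC under a key shared with the destination, which re-verifies the tag at time of use so that tampering between check and use is detected. The shared key, field names, and VM metadata are assumptions for the example.

```python
import hmac, hashlib

SHARED_KEY = b"pre-shared-migration-key"   # assumed key distribution between hosts

def sign_migration_info(info: dict) -> bytes:
    """Serialize the gathered migration metadata deterministically and tag it."""
    payload = "|".join(f"{k}={v}" for k, v in sorted(info.items())).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_before_use(info: dict, tag: bytes) -> bool:
    """Re-check the tag at 'time of use' to detect MITM tampering."""
    return hmac.compare_digest(sign_migration_info(info), tag)

if __name__ == "__main__":
    gathered = {"vm_id": "vm-17", "cpu": "2", "ram_mb": "4096", "dest": "host-09"}
    tag = sign_migration_info(gathered)          # time of check

    tampered = dict(gathered, ram_mb="16384")    # MITM inflates resource demand
    print("untampered accepted:", verify_before_use(gathered, tag))   # True
    print("tampered accepted:  ", verify_before_use(tampered, tag))   # False
```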
Due to rapid development in the web era, one must understand the main issues involved in the design, evaluation, and enhancement of websites. These issues are equally important for everyone involved, whether designers, users, or managers, and they also have significant implications for researchers working in the discipline of web evaluation. User satisfaction plays a major role in the success of a website. This paper aims to discuss the issues related to the discipline of web development and enhancement. An in-depth study of reputed research papers has been conducted to determine the different persons involved in website design, to identify their goals, and to find the different domains of websites. Different evaluation criteria and evaluation methods have also been drawn from previous studies in the discipline. The paper concludes with a proposal of a few guidelines that must be kept in mind during website construction and evaluation. A modular approach to website evaluation is proposed that recognizes the roles of the various persons involved in achieving a quality website.