An OptiAssign-PSO Based Optimisation for Multi-Objective Multi-Level Multi-Task Scheduling in Cloud Computing Environment
Advanced Analytics for Disease Tracking & Remote Intravenous Injection Monitoring
DevKraft - Fueling Collaboration in Coding Challenges
Comparative Security and Compliance Analysis of Serverless Computing Platforms: AWS Lambda, Azure Functions, and Google Cloud Functions
Blockchain Healthcare Management using Patients
A Comprehensive Review of Security Issues in Cloud Computing
An Extended Min-Min Scheduling Algorithm in Cloud Computing
Data Quality Evaluation Framework for Big Data
An Architectural Framework for Ant Lion Optimization-based Feature Selection Technique for Cloud Intrusion Detection System using Bayesian Classifier
Be Mindful of the Move: A SWOT Analysis of Cloud Computing Towards the Democratization of Technology
GridSim Installation and Implementation Process
A Survey on Energy Aware Job Scheduling Algorithms in Cloud Environment
Genetic Algorithm Using MapReduce - A Critical Review
Clustering based Cost Optimized Resource Scheduling Technique in Cloud Computing
Encroachment of Cloud Education for the Present Educational Institutions
The volume of data usage is growing drastically day by day, making the data increasingly difficult to maintain. In Big Data, huge amounts of structured, semi-structured and unstructured data, produced daily by sources all over the world, must be stored and processed. MapReduce, a programming model, is used for processing such large data sets; a MapReduce program collects data as requested. To process large volumes of data with good performance, proper scheduling is required. Task scheduling plays a major role in the Big Data cloud: it applies a set of rules to serve user requests, provide quality of service, and improve resource utilization and turnaround time. Capacity Scheduling and Delay Scheduling are used to improve Big Data performance. This paper presents an overview of the MapReduce technique for shuffling and reducing data, together with Capacity Scheduling and Delay Scheduling for improving the reliability of the data.
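As a minimal sketch of the map/shuffle/reduce flow the abstract refers to, the toy word-count example below runs the three phases in memory; it is a hypothetical illustration of the programming model, not the paper's scheduling system or a Hadoop deployment.

```python
from collections import defaultdict

# Minimal in-memory sketch of the MapReduce map -> shuffle -> reduce flow.
# Hypothetical word-count example; real frameworks distribute each phase.

def map_phase(records):
    """Emit (key, value) pairs from each input record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Group all values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate the grouped values per key."""
    return {key: sum(values) for key, values in groups.items()}

records = ["big data cloud", "cloud task scheduling", "big data"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
print(counts)  # {'big': 2, 'data': 2, 'cloud': 2, 'task': 1, 'scheduling': 1}
```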
In this article the author explains how cloud education can provide affordable, high-value education services for contemporary students, teachers, parents and administrators. The benefits of cloud education for students and faculty in educational institutions are also discussed, since cloud education is essential for institutions in the information society. Technology will become integrated into every aspect of the institution, and cloud education will therefore change classrooms, playing fields, gyms and school trips. Whether offsite or onsite, the school, teachers, students and support staff will all be connected. In the cloud education system, all classrooms will be paperless and the world will become the classroom. E-learning will change the teaching and learning process in educational settings: students can learn from anywhere and teachers can teach from anywhere. The cloud can also encourage independent learning. Teachers could adopt a flipped-classroom approach, and students can take ownership of their own learning. Teachers can put resources online for students to use, such as videos, documents, audio podcasts or interactive images. All of these resources can be accessed from a student's computer, smartphone or tablet over an internet connection via Wi-Fi, 3G or 4G.
Cloud computing and big data are changing today's on-demand video services. This paper describes how to increase the speed of video transcoding in an OpenStack private cloud environment using Hadoop MapReduce. OpenStack Juno is used to build the private cloud Infrastructure as a Service, with map code executing on the node where the video data resides, which significantly reduces data-transfer overhead. This practice, called "video locality", is one of the key advantages of Hadoop MapReduce. The experiment examines the close relationship between the Hadoop MapReduce algorithm and video transcoding. In the MapReduce video transcoding experiment on OpenStack Juno, performance comparable to the physical server was observed when running on virtual machines in the private cloud, evaluated in terms of time complexity and quality, the latter checked using PSNR (Peak Signal-to-Noise Ratio).
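A minimal sketch of the PSNR quality check named above is given below, computed for a single pair of 8-bit frames represented as equal-length pixel lists. The helper and sample values are hypothetical; the paper's experiment would compare full decoded video frames before and after transcoding.

```python
import math

# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit pixels.
# Hypothetical single-frame check; real pipelines average over all frames.

def psnr(original, transcoded, max_pixel=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences."""
    mse = sum((o - t) ** 2 for o, t in zip(original, transcoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames, no distortion
    return 10 * math.log10(max_pixel ** 2 / mse)

print(psnr([52, 60, 61, 255], [50, 60, 64, 250]))  # ~38.4 dB
```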
With cloud storage services, it is routine for data to be not only stored in the cloud but also shared among a group of users. However, performing public auditing on such shared data while preserving identity privacy remains a challenge. This paper proposes a privacy-preserving public auditing technique for shared data called 'Oruta', which supports public auditing, data privacy and identity privacy. The scheme relies on ring signatures to compute the verification information needed to audit the integrity of shared data; with this auditing, the identity of the signer on each block is kept unconditionally private from a Third Party Auditor (TPA). This paper also aims to implement traceability, i.e., in certain special situations the group manager or the original user can determine who edited the data, based on verification of metadata. In this way privacy is preserved while the group manager or original user can still trace the signer, so the original user retains complete control over the data and over the signers who access it.
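The sketch below illustrates only the block-level audit flow, heavily simplified: Oruta's homomorphic authenticable ring signatures, which hide the signer's identity from the TPA, are replaced here by a per-block HMAC purely so the example stays short and runnable. All names and keys are hypothetical and this is not the paper's construction.

```python
import hashlib
import hmac

# Simplified stand-in for Oruta's audit flow: each shared data block carries
# verification metadata (here an HMAC tag; in Oruta, a ring signature), and
# an auditor checks every block against its tag without learning identities.

def sign_blocks(blocks, key):
    """Attach verification metadata to every data block."""
    return [hmac.new(key, block, hashlib.sha256).hexdigest() for block in blocks]

def audit(blocks, tags, key):
    """Verify the integrity of all blocks against their stored tags."""
    return all(
        hmac.compare_digest(hmac.new(key, b, hashlib.sha256).hexdigest(), t)
        for b, t in zip(blocks, tags)
    )

blocks = [b"shared block 1", b"shared block 2"]
key = b"group-verification-key"       # hypothetical group key
tags = sign_blocks(blocks, key)
print(audit(blocks, tags, key))       # True: data intact
blocks[1] = b"tampered block"
print(audit(blocks, tags, key))       # False: modification detected
```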
In big data applications, data privacy is one of the most important issues when processing large-scale privacy-sensitive data sets, which requires computation resources provisioned by public cloud services. Big data refers to the commercial aggregation, mining and analysis of very large, complex and unstructured data sets; because of their size, discovering knowledge or extracting patterns from big data within a bounded time is a complicated task. The cloud and advances in big data mining and analytics have expanded the scope of information available to businesses, governments and individuals. Internet users also share private data, such as health records and financial transaction records, for mining or analysis, and data anonymization is used to hide identities and sensitive information in such data. This paper investigates the problem of big data anonymization for privacy preservation from the perspectives of scalability and cost-effectiveness. Anonymizing large-scale data within a short span of time is challenging. To address this, an Enhanced Top-Down Specialization (ETDS) approach can be developed as an enhancement of the Two-Phase Top-Down Specialization (TPTDS) approach. Accordingly, a scalable and cost-effective privacy-preserving framework is developed that provides a holistic conceptual foundation for privacy preservation over big data and enables users to exploit the full potential of the scalability, elasticity and cost-effectiveness of the cloud. Multidimensional anonymization on the MapReduce framework further increases the efficiency of the big data processing system.
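To make the top-down specialization idea concrete, the sketch below performs one specialization step under k-anonymity: records start at the most general value in a taxonomy and are specialized to their true values only if every resulting group still contains at least k records. The taxonomy and records are hypothetical toy data; ETDS/TPTDS parallelize many such steps with MapReduce.

```python
from collections import Counter

# One top-down specialization (TDS) step under a k-anonymity constraint.
# Hypothetical single-attribute example, not the paper's data sets.

taxonomy = {"Any-Job": ["Engineer", "Artist"]}   # generalization hierarchy
originals = ["Engineer", "Engineer", "Engineer", "Artist"]
anonymized = ["Any-Job"] * len(originals)        # start fully generalized

def is_k_anonymous(values, k):
    """Every distinct quasi-identifier value must cover at least k records."""
    return all(count >= k for count in Counter(values).values())

def try_specialize(parent, k):
    """Replace `parent` with each record's true value if k-anonymity holds."""
    candidate = [orig if anon == parent else anon
                 for orig, anon in zip(originals, anonymized)]
    return candidate if is_k_anonymous(candidate, k) else anonymized

print(try_specialize("Any-Job", k=2))  # rejected: the 'Artist' group is too small
print(try_specialize("Any-Job", k=1))  # accepted: specializes to the true values
```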