Many schemes have recently been proposed for storing data across more than one cloud. Distributing data over multiple cloud storage providers (CSPs) automatically gives users a certain degree of control over information leakage, because no single point of attack can leak all of the data. However, unplanned distribution of data chunks can still lead to high information disclosure even when several clouds are used. In this paper, we study a significant information leakage problem caused by unplanned data distribution in multicloud storage services. We design an approximate algorithm to efficiently generate similarity-preserving signatures for data chunks based on MinHash and Bloom filters, and we design a function to compute the information leakage based on these signatures. We then present an effective storage plan generation algorithm, based on clustering, for distributing data chunks across multiple clouds with minimal information leakage. Finally, we evaluate our scheme using two real datasets from Wikipedia and GitHub. We show that our scheme can reduce information leakage by up to 60% compared with unplanned placement. Moreover, our analysis of system attackability shows that our scheme makes attacks on the data more complex. Mr. Abhijit Paruchuri | Mr. Maddirala Jagadish | Dr. M. China Rao "The Optimizing Information Leakage in Multicloud Storage Services" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-1, December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47948.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/47948/the-optimizing-information-leakage-in-multicloud-storage-services/mr-abhijit-paruchuri
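The core primitive named in the abstract, a similarity-preserving signature built from MinHash, can be illustrated in isolation. The sketch below is only a generic illustration under assumed parameters (signature length, shingle width, hash family); it is not the paper's exact construction, and it omits the Bloom-filter packing step that the scheme layers on top of the signatures:

```python
import hashlib

NUM_HASHES = 64    # signature length; an assumed parameter
SHINGLE_WIDTH = 8  # byte-shingle width; also assumed

def _hash(token: bytes, seed: int) -> int:
    # Derive a family of hash functions by seeding SHA-1 (an illustrative choice).
    digest = hashlib.sha1(seed.to_bytes(4, "big") + token).digest()
    return int.from_bytes(digest[:8], "big")

def shingles(chunk: bytes) -> set:
    # Represent a chunk as the set of its overlapping byte shingles.
    n = max(1, len(chunk) - SHINGLE_WIDTH + 1)
    return {chunk[i:i + SHINGLE_WIDTH] for i in range(n)}

def minhash_signature(chunk: bytes) -> list:
    # Keep, for each hash function, the minimum hash over all shingles.
    s = shingles(chunk)
    return [min(_hash(t, seed) for t in s) for seed in range(NUM_HASHES)]

def estimated_similarity(sig_a: list, sig_b: list) -> float:
    # The fraction of matching positions estimates the Jaccard similarity
    # of the two chunks, without ever comparing their raw contents.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

sig1 = minhash_signature(b"the quick brown fox jumps over the lazy dog")
sig2 = minhash_signature(b"the quick brown fox jumps over the lazy cat")
print(estimated_similarity(sig1, sig2))  # high value: the chunks are near-duplicates
```

A placement algorithm can then cluster chunks whose estimated similarity is high and keep such clusters on the same provider, which is the intuition behind the clustering-based storage plan described above.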
The document provides information about Pig Latin, the data processing language used with Apache Pig. It discusses Pig Latin basics like the data model, relations, tuples, and fields. It also covers Pig Latin statements, loading and storing data, data types, relational operations like group, join, cross, union, and diagnostic operators like dump and describe.
Amazon SimpleDB is a hosted database service that allows developers to store and query structured data via web services requests without needing to worry about data modeling, index maintenance, or performance tuning. It is a flexible, scalable, and inexpensive key-value store that automatically indexes data and enables real-time lookups and simple queries without complexity. Netflix migrated parts of its database from Oracle to Amazon SimpleDB to take advantage of its high availability, flexibility, and cost effectiveness compared to running its own data centers.
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem faster. It allows for larger problems to be solved and provides cost savings over serial computing. There are different models of parallelism including data parallelism and task parallelism. Flynn's taxonomy categorizes computer architectures as SISD, SIMD, MISD and MIMD based on how instructions and data are handled. Shared memory and distributed memory are two common architectures that differ in scalability and communication handling. Programming models include shared memory, message passing and data parallel approaches. Design considerations for parallel programs include partitioning work, communication between processes, and synchronization.
This document discusses different types of distributed databases. It covers data models like relational, aggregate-oriented, key-value, and document models. It also discusses different distribution models like sharding and replication. Consistency models for distributed databases are explained including eventual consistency and the CAP theorem. Key-value stores are described in more detail as a simple but widely used data model with features like consistency, scaling, and suitable use cases. Specific key-value databases like Redis, Riak, and DynamoDB are mentioned.
Mach is an object-oriented operating system kernel. It defines basic abstractions like tasks, threads, ports, and memory objects. Tasks provide execution environments, threads are units of execution, ports enable communication between tasks using messages, and memory objects are sources of shared memory. Tasks communicate asynchronously through message passing to ports using the mach_msg system call.
This document discusses real-time big data applications and provides a reference architecture for search, discovery, and analytics. It describes combining analytical and operational workloads using a unified data model and operational database. Examples are given of organizations using this approach for real-time search, analytics and continuous adaptation of large and diverse datasets.
Data preprocessing techniques are applied before mining. These can improve the overall quality of the patterns mined and the time required for the actual mining.
Some important data preprocessing steps that are needed before applying a data mining algorithm to any data set are described in detail in these slides.
Practical advice on how to achieve persistence in Redis. A detailed overview of the pros and cons of RDB snapshots and AOF logging, with tips and tricks for proper persistence configuration with Redis pools and master/slave replication.
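As a rough illustration of the kind of configuration those slides discuss, the sketch below adjusts the RDB and AOF settings of a running instance through redis-py; the host and threshold values are example assumptions, not recommendations taken from the slides:

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed local instance

# RDB snapshots: dump if >=1 change in 900 s or >=10 changes in 300 s (example thresholds).
r.config_set("save", "900 1 300 10")

# AOF: append-only log with per-second fsync, a common durability/latency trade-off.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")

print(r.config_get("appendonly"), r.config_get("save"))
```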
Cassandra is an open-source, distributed, highly scalable and fault-tolerant database. It is a good choice for managing structured, semi-structured or unstructured data in large amounts.
Information technology has led us into an era where the production, sharing and use of information are part of everyday life, often without our being aware of it: it is now almost impossible not to leave a digital trail of many of the actions we take every day, for example through digital content such as photos, videos and blog posts, and everything that revolves around social networks (Facebook and Twitter in particular). Added to this, with the "internet of things" we see a growing number of devices such as watches, bracelets, thermostats and many other items that can connect to the network and therefore generate large data streams. This explosion of data explains the emergence of the term Big Data: data produced in large quantities, at remarkable speed and in different formats, which requires processing technologies and resources that go far beyond conventional data management and storage systems. It is immediately clear that, in these contexts, 1) storage models based on the relational model and 2) processing systems based on stored procedures and grid computing are no longer applicable. Regarding point 1, RDBMSs, widely used for a great variety of applications, run into problems when the amount of data grows beyond certain limits. Scalability and implementation cost are only part of the disadvantages: very often, when dealing with big data, the variability of the data, that is, the lack of a fixed structure, is also a significant problem. This has given a boost to the development of NoSQL databases. The NoSQL Databases website defines them as "Next Generation Databases mostly addressing some of the points: being non-relational, distributed, open source and horizontally scalable." These databases are distributed, open source, horizontally scalable, schema-free (key-value, column-oriented, document-based and graph-based), easily replicable, without full ACID guarantees, and able to handle large amounts of data. They are typically integrated with processing tools based on the MapReduce paradigm proposed by Google in 2004. MapReduce, together with the open source Hadoop framework, represents the new model for distributed processing of large amounts of data, supplanting techniques based on stored procedures and computational grids (point 2). The relational model taught in basic database design courses has many limitations compared with the demands posed by new applications, which instead rely on NoSQL databases to store Big Data and on MapReduce to process it.
Course Website http://pbdmng.datatoknowledge.it/
Contact me for further information and to download the material.
This document provides an overview and introduction to NoSQL databases. It discusses key-value stores like Dynamo and BigTable, which are distributed, scalable databases that sacrifice complex queries for availability and performance. It also explains column-oriented databases like Cassandra that scale to massive workloads. The document compares the CAP theorem and consistency models of these databases and provides examples of their architectures, data models, and operations.
MongoDB has taken a clear lead in adoption among the new generation of databases, including the enormous variety of NoSQL offerings. A key reason for this lead has been a unique combination of agility and scalability. Agility provides business units with a quick start and flexibility to maintain development velocity, despite changing data and requirements. Scalability maintains that flexibility while providing fast, interactive performance as data volume and usage increase. We'll address the key organizational, operational, and engineering considerations to ensure that agility and scalability stay aligned at increasing scale, from small development instances to web-scale applications. We will also survey some key examples of highly-scaled customer applications of MongoDB.
Cassandra is an open source, distributed database management system designed to handle large amounts of data across many commodity servers. It provides high availability with no single point of failure, linear scalability and performance, as well as flexibility in schemas. Cassandra finds use in large companies like Facebook, Netflix and eBay due to its abilities to scale and perform well under heavy loads. However, it may not be suited for applications requiring many joins, transactions or strong consistency guarantees.
This document provides an overview of non-relational (NoSQL) databases. It discusses the history and characteristics of NoSQL databases, including that they do not require rigid schemas and can automatically scale across servers. The document also categorizes major types of NoSQL databases, describes some popular NoSQL databases like Dynamo and Cassandra, and discusses benefits and limitations of both SQL and NoSQL databases.
The document discusses the rise of NoSQL databases. It notes that NoSQL databases are designed to run on clusters of commodity hardware, making them better suited than relational databases for large-scale data and web-scale applications. The document also discusses some of the limitations of relational databases, including the impedance mismatch between relational and in-memory data structures and their inability to easily scale across clusters. This has led many large websites and organizations handling big data to adopt NoSQL databases that are more performant and scalable.
Apache Cassandra is a free, distributed, open source, and highly scalable NoSQL database that is designed to handle large amounts of data across many commodity servers. It provides high availability with no single point of failure, linear scalability, and tunable consistency. Cassandra's architecture allows it to spread data across a cluster of servers and replicate across multiple data centers for fault tolerance. It is used by many large companies for applications that require high performance, scalability, and availability.
This document provides an introduction to the Pig analytics platform for Hadoop. It begins with an overview of big data and Hadoop, then discusses the basics of Pig including its data model, language called Pig Latin, and components. Key points made are that Pig provides a high-level language for expressing data analysis processes, compiles queries into MapReduce programs for execution, and allows for easier programming than lower-level systems like Java MapReduce. The document also compares Pig to SQL and Hive, and demonstrates visualizing Pig jobs with the Twitter Ambrose tool.
Big Data raises challenges about how to process such a vast pool of raw data and how to extract value from it for our lives. To address these demands, an ecosystem of tools named Hadoop was conceived.
This document summarizes challenges with large partitions in Cassandra and potential solutions. When a large partition is read, the key cache can cause garbage collection pressure as it stores the partition's index on the Java heap. Currently, the index is stored off-heap only if the partition exceeds a configurable size, otherwise it is kept on-heap. Fully migrating the key cache off-heap is another potential solution but incurs serialization costs.
This document outlines the scheme and syllabus for the MSc Computer Science program offered by the Department of Computer Science at the University of Kerala.
The program is a 4 semester course with 3 semesters of taught courses and 1 semester of project work. The first two semesters cover core subjects like discrete structures, computer architecture, data structures, algorithms, operating systems, databases etc. along with electives. The third semester has more advanced subjects and electives. The fourth semester involves a project and viva voce.
The syllabus details are provided for the core subjects to be taught in the first semester, including topics like logic, graphs, automata theory for discrete structures, computer components, memory organization
The presentation begins with an overview of the growth of non-structured data and the benefits NoSQL products provide. It then provides an evaluation of the more popular NoSQL products on the market including MongoDB, Cassandra, Neo4J, and Redis. With NoSQL architectures becoming an increasingly appealing database management option for many organizations, this presentation will help you effectively evaluate the most popular NoSQL offerings and determine which one best meets your business needs.
This document provides an introduction to database management systems (DBMS). It discusses what a DBMS is, common database applications, and drawbacks of using file systems to store data that DBMS aim to address, such as data redundancy, integrity issues, and concurrent access problems. It also summarizes key components of a DBMS, including its logical and physical levels of abstraction, data models, data definition and manipulation languages, storage management, query processing, transaction management, and common database architectures.
The document discusses different database system architectures including centralized, client-server, server-based transaction processing, data servers, parallel, and distributed systems. It covers key aspects of each architecture such as hardware components, process structure, advantages and limitations. The main types are centralized systems with one computer, client-server with backend database servers and frontend tools, parallel systems using multiple processors for improved performance, and distributed systems with data and users spread across a network.
This document provides an overview of key concepts related to data warehousing including what a data warehouse is, common data warehouse architectures, types of data warehouses, and dimensional modeling techniques. It defines key terms like facts, dimensions, star schemas, and snowflake schemas and provides examples of each. It also discusses business intelligence tools that can analyze and extract insights from data warehouses.
This document discusses vector processing and multiprocessor principles. It explains that vector processing performs operations on vectors to gain speedups of 10-20x over scalar processing. Multiprocessor systems use two or more CPUs for advantages like reduced costs, increased reliability and throughput. They can implement techniques like multitasking, multithreading and multiprogramming to execute multiple tasks simultaneously.
This document provides an overview of MongoDB sharding. It discusses how MongoDB addresses the need for horizontal scalability as data and throughput needs exceed the capabilities of a single machine. MongoDB uses sharding to partition data across multiple machines or shards. The key points are:
- MongoDB shards or partitions data by a shard key, distributing data ranges across shards for scalability.
- A configuration server stores metadata about sharding setup and chunk distribution. Mongos instances route queries to appropriate shards.
- MongoDB automatically splits and migrates chunks as data grows to balance load across shards.
- Setting up sharding in MongoDB requires minimal configuration and provides a consistent interface like a single database (a minimal client-side setup sketch follows below).
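As a rough companion to the summary above, here is a minimal client-side sketch of enabling sharding, assuming an already-running sharded cluster reached through a mongos router; the database name, collection name and shard key are made-up examples:

```python
from pymongo import MongoClient

# Connect to the mongos query router of an existing sharded cluster (assumed address).
client = MongoClient("mongodb://localhost:27017")

# Enable sharding for a database, then shard one of its collections on a shard key.
# MongoDB splits the key range into chunks, and the balancer migrates chunks
# between shards as data grows.
client.admin.command("enableSharding", "appdb")
client.admin.command("shardCollection", "appdb.events", key={"userId": 1})

# Reads and writes keep going through mongos exactly as with a single database.
client.appdb.events.insert_one({"userId": 42, "action": "login"})
```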
The document provides an introduction and overview of MongoDB, including what NoSQL is, the different types of NoSQL databases, when to use MongoDB, its key features like scalability and flexibility, how to install and use basic commands like creating databases and collections, and references for further learning.
Maintaining Secure Cloud by Continuous Auditing (ijtsrd)
Increases in cloud computing capacity, as well as decreases in the cost of processing, are moving at a fast pace. These patterns make it incumbent upon organizations to keep pace with changes in technology that significantly influence security. Cloud security auditing depends upon the environment, and the rapid growth of cloud computing is an important new context in world economics. The small price of entry, bandwidth, and processing power capability means that individuals and organizations of all sizes have more capacity and agility to exercise shifts in computation and to disrupt industry in cyberspace than more traditional domains of business economics worldwide. An analysis of prevalent cloud security issues and the utilization of cloud audit methods can mitigate security concerns. This verification methodology indicates how to use frameworks to review cloud service providers (CSPs). The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively researched, there is still little focus on detective controls related to cloud accountability and auditability. The complexity resulting from large-scale virtualization and data distribution carried out in current clouds has revealed an urgent research agenda for cloud accountability, as has the shift in focus of customer concerns from servers to data. M. Kanimozhi | A. Aishwarya | S. Triumal "Maintaining Secure Cloud by Continuous Auditing" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-3, April 2018, URL: http://www.ijtsrd.com/papers/ijtsrd10829.pdf http://www.ijtsrd.com/engineering/computer-engineering/10829/maintaining-secure-cloud-by-continuous-auditing/m-kanimozhi
IRJET - Cost Effective Workflow Scheduling in Bigdata (IRJET Journal)
This document summarizes research on cost-effective workflow scheduling in big data environments. It discusses a Pointer Gossip Content Addressable Network Montage Framework that allows automatic selection of target clouds, uniform access to clouds, and workflow data management while meeting service level agreements at lowest cost. The framework is evaluated using a real scientific workflow application in different deployment scenarios. Results show it can execute workflows with expected performance and quality at lowest cost.
Privacy preserving public auditing for secured cloud storage (dbpublications)
As the cloud computing technology develops during the last decade, outsourcing data to cloud service for storage becomes an attractive trend, which benefits in sparing efforts on heavy data maintenance and management. Nevertheless, since the outsourced cloud storage is not fully trustworthy, it raises security concerns on how to realize data deduplication in cloud while achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication on cloud data. Specifically, aiming at achieving both data integrity and deduplication in cloud, we propose two secure systems, namely SecCloud and SecCloud+. SecCloud introduces an auditing entity with a maintenance of a MapReduce cloud, which helps clients generate data tags before uploading as well as audit the integrity of data having been stored in cloud. Compared with previous work, the computation by user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is designed motivated by the fact that customers always want to encrypt their data before uploading, and enables integrity auditing and secure deduplication on encrypted data.
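The SecCloud and SecCloud+ protocols themselves involve tag generation and a MapReduce-based auditor, which is beyond a short snippet; the sketch below only illustrates the underlying deduplication idea they build on, a client-computed fingerprint checked before storing, with an in-memory stand-in for the cloud store:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: keeps one copy per unique fingerprint."""

    def __init__(self):
        self.blobs = {}

    def upload(self, data: bytes) -> str:
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self.blobs:   # duplicate check before storing
            self.blobs[fingerprint] = data
        return fingerprint

store = DedupStore()
tag1 = store.upload(b"quarterly-report.pdf contents")
tag2 = store.upload(b"quarterly-report.pdf contents")  # deduplicated: nothing stored twice
assert tag1 == tag2 and len(store.blobs) == 1
```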
IRJET - An Data Sharing in Group Member with High Security using Symmetric Bal... (IRJET Journal)
This document proposes a novel approach for secure group data sharing in cloud computing using a symmetric balanced incomplete block design (SBIBD). The approach aims to enable anonymous yet traceable data sharing among group members in the cloud. It uses the concept of group signature and SBIBD to efficiently generate a common group key for encrypting and decrypting shared data, providing security while supporting group data sharing. The SBIBD and group signature techniques are used to achieve privacy, security, traceability and efficient access control. Experimental results demonstrate the effectiveness of generating the group key and uploading/accessing encrypted files through the group manager. The proposed approach supports dynamic membership changes and is secure and efficient for group data sharing in cloud computing.
BIG DATA SECURITY AND PRIVACY ISSUES IN THE CLOUD (IJNSA Journal)
Many organizations demand efficient solutions to store and analyze huge amounts of information. Cloud computing as an enabler provides scalable resources and significant economic benefits in the form of reduced operational costs. This paradigm raises a broad range of security and privacy issues that must be taken into consideration. Multi-tenancy, loss of control, and trust are key challenges in cloud computing environments. This paper reviews the existing technologies and a wide array of both earlier and state-of-the-art projects on cloud security and privacy. We categorize the existing research according to the cloud reference architecture layers: orchestration, resource control, physical resource, and cloud service management, in addition to reviewing the recent developments for enhancing the security of Apache Hadoop as one of the most deployed big data infrastructures. We also outline the frontier research on privacy-preserving data-intensive applications in cloud computing, such as privacy threat modeling and privacy enhancing solutions.
This document discusses using Azure HDInsight for big data applications. It provides an overview of HDInsight and describes how it can be used for various big data scenarios like modern data warehousing, advanced analytics, and IoT. It also discusses the architecture and components of HDInsight, how to create and manage HDInsight clusters, and how HDInsight integrates with other Azure services for big data and analytics workloads.
Secure Data Sharing with Data Partitioning in Big Data (rahulmonikasharma, 24-12-2017)
Hadoop is a framework for the transformation and analysis of very large data sets. This paper presents a distributed approach to data storage using the Hadoop Distributed File System (HDFS). The scheme overcomes the drawbacks of other storage schemes because it stores the data in distributed form, so there is little chance of data loss. HDFS stores data in the form of replicas, which is advantageous in case of node failure: the user can easily recover from data loss, unlike with other storage systems. We have implemented an ID-based ring signature scheme to provide secure data sharing over the network, so that only authorized persons have access to the data. The system is made more resistant to attack with the help of the Advanced Encryption Standard (AES): even if an attacker succeeds in obtaining the source data, they are unable to decode it.
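The ID-based ring signature scheme is specific to the paper, but the AES step it mentions can be shown generically. The sketch below encrypts a data block with AES-GCM using the third-party cryptography package; the choice of mode and library is an assumption for illustration, not something stated in the summary:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_block(key: bytes, plaintext: bytes) -> bytes:
    # AES-GCM gives confidentiality plus an integrity tag for each stored block.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_block(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_block(key, b"block-0001 of the shared dataset")
assert decrypt_block(key, blob) == b"block-0001 of the shared dataset"
```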
Big Data Analytics in the Cloud for Business Intelligence.docx (VENKATAAVINASH10)
The document discusses how cloud computing and big data analytics can be combined to provide powerful benefits for businesses when it comes to business intelligence. It outlines that cloud computing reduces costs for businesses by providing on-demand access to computing resources, while big data analytics allows businesses to gain insights from large amounts of data. However, big data analysis requires significant computing power that can be expensive for small-to-medium enterprises. The document argues that cloud computing can provide the storage and processing power needed for big data analytics in a cost-effective way for businesses. Combining these two technologies has the potential to improve decision making for businesses.
International Journal on Cloud Computing: Services and Architecture (IJCCSA)
Cloud computing helps enterprises transform business and technology. Companies have begun to look for solutions that would help reduce their infrastructure costs and improve profitability. Cloud computing is becoming a foundation for benefits well beyond IT cost savings. Yet, many business leaders are concerned about cloud security, privacy, availability, and data protection. To discuss and address these issues, we invite researchers who focus on cloud computing to shed more light on this emerging field. This peer-reviewed open access journal aims to bring together researchers and practitioners working on all security aspects of cloud-centric and outsourced computing.
IRJET - Secure Re-Encrypted PHR Shared to Users Efficiently in Cloud Computing (IRJET Journal)
This document proposes a Securely Re-Encrypted PHR Shared to Users Efficiently in Cloud Computing (SeSPHR) system. The SeSPHR system aims to securely store and share patients' Personal Health Records (PHRs) with authorized entities in the cloud while preserving privacy. It encrypts PHRs stored on untrusted cloud servers and only allows verified users access using re-encryption keys from a semi-trusted proxy server. The system enforces patient-centric access management of PHR components based on access levels and supports dynamic addition and removal of authorized users. The operation of SeSPHR was analyzed and verified using High-Level Petri Nets, SMT-Lib and Z3 solver. Performance analysis
An Efficient and Safe Data Sharing Scheme for Mobile Cloud Computing (ijtsrd)
As the popularity of cloud computing increases, mobile devices can store or retrieve personal information from anywhere at any time. As a result, the issue of data protection in the mobile cloud is becoming increasingly severe and hinders the further development of mobile cloud computing. Substantial work has been carried out to strengthen cloud protection; most of it, however, is not applicable to mobile clouds, because mobile devices have limited computing resources and power. In this paper, I propose a lightweight data sharing scheme (LDSS) for mobile cloud computing. It uses CP-ABE (Ciphertext-Policy Attribute-Based Encryption), an access control technology used in conventional cloud environments, but changes the structure of the access control tree to make it suitable for mobile cloud environments. LDSS moves a large portion of the computationally intensive access control tree transformation in CP-ABE from mobile devices to external proxy servers. Also, to reduce the user revocation cost, it introduces attribute description fields to implement lazy revocation, which is a thorny issue in program-based CP-ABE systems. Experimental results show that LDSS can effectively lower the overhead on the mobile device side when users share information in mobile cloud environments. Abhishek. D | Dr. Lakshmi J. V. N "An Efficient and Safe Data Sharing Scheme for Mobile Cloud Computing" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-1, December 2020, URL: https://www.ijtsrd.com/papers/ijtsrd35909.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-security/35909/an-efficient-and-safe-data-sharing-scheme-for-mobile-cloud-computing/abhishek-d
A Comprehensive Study on Big Data Applications and Challenges (ijcisjournal)
Big Data has gained much interest from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that quickly exceeds manageable bounds. As information is transferred and shared at light speed over optical fiber and wireless networks, the volume of data and the speed of market growth increase. Conversely, the fast growth rate of such large data generates copious challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Even so, Big Data is still in its early stage, and the domain has not been reviewed in general. Hence, this study expansively surveys and classifies an assortment of attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Map/Reduce is a programming model for efficient distributed computing that works well with semi-structured and unstructured data; it is a simple model but well suited to many applications, such as log processing and web index building.
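The closing remark about Map/Reduce can be made concrete with a toy example: a job is just a map function that emits key-value pairs and a reduce function that aggregates the values for each key, with the framework handling distribution and shuffling. Below is a single-process sketch of the log-processing use case, counting HTTP status codes; the log format is an assumption:

```python
from collections import defaultdict

def map_phase(line: str):
    # Emit (status_code, 1) per log line; assumes common log format, where the
    # HTTP status code is the second-to-last field.
    fields = line.split()
    if len(fields) >= 2:
        yield fields[-2], 1

def reduce_phase(key: str, values) -> tuple:
    return key, sum(values)

logs = [
    '127.0.0.1 - - [01/Jan/2024] "GET /index.html HTTP/1.1" 200 512',
    '127.0.0.1 - - [01/Jan/2024] "GET /missing HTTP/1.1" 404 128',
    '127.0.0.1 - - [01/Jan/2024] "GET /index.html HTTP/1.1" 200 512',
]

# "Shuffle": group every mapped value by its key, then reduce each group.
groups = defaultdict(list)
for line in logs:
    for key, value in map_phase(line):
        groups[key].append(value)

print(dict(reduce_phase(k, v) for k, v in groups.items()))  # {'200': 2, '404': 1}
```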
A New Multi-Dimensional Hyperbolic Structure for Cloud Service Indexing (ijdms)
Nowadays, cloud systems rely on simple key-value data processing, which cannot support similarity search effectively because of the lack of efficient index structures; moreover, as dimensionality increases, existing tree-like index structures run into the "curse of dimensionality" problem. In this paper, we define a new cloud computing architecture in which the global index over storage nodes is based on a Distributed Hash Table (DHT). To reduce computation cost, different services are stored on a hyperbolic tree using virtual coordinates, so that greedy routing based on the properties of hyperbolic space improves query storage and retrieval performance. We then implement and evaluate our cloud indexing structure based on a hyperbolic tree with virtual coordinates taken in the hyperbolic plane. Our experimental results, compared with other cloud systems, show that our solution ensures consistency and scalability for a Cloud platform.
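Greedy routing over virtual hyperbolic coordinates, the mechanism this abstract relies on, can be sketched independently of the proposed index. The toy example below (not the paper's algorithm; the overlay, coordinates, and node names are invented) forwards a lookup at each hop to the neighbour whose Poincare-disk coordinate is closest to the target:

```python
import math

def hyperbolic_distance(u, v):
    # Distance between two points of the Poincare disk model (|u|, |v| < 1).
    du = 1.0 - (u[0] ** 2 + u[1] ** 2)
    dv = 1.0 - (v[0] ** 2 + v[1] ** 2)
    diff = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    return math.acosh(1.0 + 2.0 * diff / (du * dv))

def greedy_route(graph, coords, source, target):
    """Forward hop by hop to the neighbour hyperbolically closest to the target."""
    path, current = [source], source
    while current != target:
        best = min(graph[current],
                   key=lambda n: hyperbolic_distance(coords[n], coords[target]))
        if hyperbolic_distance(coords[best], coords[target]) >= \
           hyperbolic_distance(coords[current], coords[target]):
            break  # no neighbour makes progress: greedy dead end
        path.append(best)
        current = best
    return path

# Tiny made-up overlay: four nodes with virtual coordinates in the disk.
coords = {"A": (0.0, 0.0), "B": (0.5, 0.0), "C": (-0.5, 0.0), "D": (0.8, 0.0)}
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(greedy_route(graph, coords, "C", "D"))  # ['C', 'A', 'B', 'D']
```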
Similar to The Optimizing Information Leakage in Multicloud Storage Services
‘Six Sigma Technique’: A Journey Through its Implementation (ijtsrd)
The manufacturing industries all over the world are facing tough challenges for growth, development and sustainability in today’s competitive environment. They have to achieve a leading position by adapting to the global competitive environment, delivering goods and services at low cost, prime quality and a better price to increase wealth and consumer satisfaction. Cost management ensures the profit, growth and sustainability of the business through the implementation of continuous improvement techniques like Six Sigma, and this leads to optimized business performance. The method drives customer satisfaction, low variation, and reduction in waste and cycle time, resulting in a competitive advantage over industries which did not implement it. The main objective of this paper, ‘Six Sigma Technique: A Journey Through Its Implementation’, is to conceptualize the effectiveness of the Six Sigma technique through the journey of its implementation. Aditi Sunilkumar Ghosalkar "‘Six Sigma Technique’: A Journey Through its Implementation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1, February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64546.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64546/‘six-sigma-technique’-a-journey-through-its-implementation/aditi-sunilkumar-ghosalkar
Edge Computing in Space: Enhancing Data Processing and Communication for Space... (ijtsrd)
Edge computing, a paradigm that involves processing data closer to its source, has gained significant attention for its potential to revolutionize data processing and communication in space missions. With the increasing complexity and data volume generated by modern space missions, traditional centralized computing approaches face challenges related to latency, bandwidth, and security. Edge computing in space, involving on board processing and analysis of data, offers promising solutions to these challenges. This paper explores the concept of edge computing in space, its benefits, applications, and future prospects in enhancing space missions. Manish Verma "Edge Computing in Space: Enhancing Data Processing and Communication for Space Missions" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64541.pdf Paper Url: https://www.ijtsrd.com/computer-science/artificial-intelligence/64541/edge-computing-in-space-enhancing-data-processing-and-communication-for-space-missions/manish-verma
Dynamics of Communal Politics in 21st Century India Challenges and Prospectsijtsrd
Communal politics in India has evolved through centuries, weaving a complex tapestry shaped by historical legacies, colonial influences, and contemporary socio political transformations. This research comprehensively examines the dynamics of communal politics in 21st century India, emphasizing its historical roots, socio political dynamics, economic implications, challenges, and prospects for mitigation. The historical perspective unravels the intricate interplay of religious identities and power dynamics from ancient civilizations to the impact of colonial rule, providing insights into the evolution of communalism. The socio political dynamics section delves into the contemporary manifestations, exploring the roles of identity politics, socio economic disparities, and globalization. The economic implications section highlights how communal politics intersects with economic issues, perpetuating disparities and influencing resource allocation. Challenges posed by communal politics are scrutinized, revealing multifaceted issues ranging from social fragmentation to threats against democratic values. The prospects for mitigation present a multifaceted approach, incorporating policy interventions, community engagement, and educational initiatives. The paper conducts a comparative analysis with international examples, identifying common patterns such as identity politics and economic disparities. It also examines unique challenges, emphasizing Indias diverse religious landscape, historical legacy, and secular framework. Lessons for effective strategies are drawn from international experiences, offering insights into inclusive policies, interfaith dialogue, media regulation, and global cooperation. By scrutinizing historical epochs, contemporary dynamics, economic implications, and international comparisons, this research provides a comprehensive understanding of communal politics in India. The proposed strategies for mitigation underscore the importance of a holistic approach to foster social harmony, inclusivity, and democratic values. Rose Hossain "Dynamics of Communal Politics in 21st Century India: Challenges and Prospects" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64528.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/history/64528/dynamics-of-communal-politics-in-21st-century-india-challenges-and-prospects/rose-hossain
Assess Perspective and Knowledge of Healthcare Providers Towards Elehealth in...ijtsrd
Background and Objective Telehealth has become a well known tool for the delivery of health care in Saudi Arabia, and the perspective and knowledge of healthcare providers are influential in the implementation, adoption and advancement of the method. This systematic review was conducted to examine the current literature base regarding telehealth and the related healthcare professional perspective and knowledge in the Kingdom of Saudi Arabia. Materials and Methods This systematic review was conducted by searching 7 databases including, MEDLINE, CINHAL, Web of Science, Scopus, PubMed, PsycINFO, and ProQuest Central. Studies on healthcare practitioners telehealth knowledge and perspectives published in English in Saudi Arabia from 2000 to 2023 were included. Boland directed this comprehensive review. The researchers examined each connected study using the AXIS tool, which evaluates cross sectional systematic reviews. Narrative synthesis was used to summarise and convey the data. Results Out of 1840 search results, 10 studies were included. Positive outlook and limited knowledge among providers were seen across trials. Healthcare professionals like telehealth for its ability to improve quality, access, and delivery, save time and money, and be successful. Age, gender, occupation, and work experience also affect health workers knowledge. In Saudi Arabia, healthcare professionals face inadequate expert assistance, patient privacy, internet connection concerns, lack of training courses, lack of telehealth understanding, and high costs while performing telemedicine. Conclusions Healthcare practitioners telehealth perceptions and knowledge were examined in this systematic study. Its collection of concerned experts different personal attitudes and expertise would help enhance telehealths implementation in Saudi Arabia, develop its healthcare delivery alternative, and eliminate frequent problems. Badriah Mousa I Mulayhi | Dr. Jomin George | Judy Jenkins "Assess Perspective and Knowledge of Healthcare Providers Towards Elehealth in Saudi Arabia: A Systematic Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64535.pdf Paper Url: https://www.ijtsrd.com/medicine/other/64535/assess-perspective-and-knowledge-of-healthcare-providers-towards-elehealth-in-saudi-arabia-a-systematic-review/badriah-mousa-i-mulayhi
The Impact of Digital Media on the Decentralization of Power and the Erosion ...ijtsrd
The impact of digital media on the distribution of power and the weakening of traditional gatekeepers has gained considerable attention in recent years. The adoption of digital technologies and the internet has resulted in declining influence and power for traditional gatekeepers such as publishing houses and news organizations. Simultaneously, digital media has facilitated the emergence of new voices and players in the media industry. Digital medias impact on power decentralization and gatekeeper erosion is visible in several ways. One significant aspect is the democratization of information, which enables anyone with an internet connection to publish and share content globally, leading to citizen journalism and bypassing traditional gatekeepers. Another aspect is the disruption of conventional media industry business models, as traditional organizations struggle to adjust to the decrease in advertising revenue and the rise of digital platforms. Alternative business models, such as subscription models and crowdfunding, have become more prevalent, leading to the emergence of new players. Overall, the impact of digital media on the distribution of power and the weakening of traditional gatekeepers has brought about significant changes in the media landscape and the way information is shared. Further research is required to fully comprehend the implications of these changes and their impact on society. Dr. Kusum Lata "The Impact of Digital Media on the Decentralization of Power and the Erosion of Traditional Gatekeepers" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64544.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/political-science/64544/the-impact-of-digital-media-on-the-decentralization-of-power-and-the-erosion-of-traditional-gatekeepers/dr-kusum-lata
Online Voices, Offline Impact Ambedkars Ideals and Socio Political Inclusion ...ijtsrd
This research investigates the nexus between online discussions on Dr. B.R. Ambedkars ideals and their impact on social inclusion among college students in Gurugram, Haryana. Surveying 240 students from 12 government colleges, findings indicate that 65 actively engage in online discussions, with 80 demonstrating moderate to high awareness of Ambedkars ideals. Statistically significant correlations reveal that higher online engagement correlates with increased awareness p 0.05 and perceived social inclusion. Variations across colleges and a notable effect of college type on perceived social inclusion highlight the influence of contextual factors. Furthermore, the intersectional analysis underscores nuanced differences based on gender, caste, and socio economic status. Dr. Kusum Lata "Online Voices, Offline Impact: Ambedkar's Ideals and Socio-Political Inclusion - A Study of Gurugram District" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64543.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/political-science/64543/online-voices-offline-impact-ambedkars-ideals-and-sociopolitical-inclusion--a-study-of-gurugram-district/dr-kusum-lata
Problems and Challenges of Agro Entreprenurship A Studyijtsrd
Noting calls for contextualizing Agro entrepreneurs problems and challenges of the agro entrepreneurs and for greater attention to the Role of entrepreneurs in agro entrepreneurship research, we conduct a systematic literature review of extent research in agriculture entrepreneurship to overcome the study objectives of complications of agro entrepreneurs through various factors, Development of agriculture products is a key factor for the overall economic growth of agro entrepreneurs Agro Entrepreneurs produces firsthand large scale employment, utilizes the labor and natural resources, This research outlines the problems of Weather and Soil Erosions, Market price fluctuation, stimulates labor cost problems, reduces concentration of Price volatility, Dependency on Intermediaries, induces Limited Bargaining Power, and Storage and Transportation Costs. This paper mainly devoted to highlight Problems and challenges faced for the sustainable of Agro Entrepreneurs in India. Vinay Prasad B "Problems and Challenges of Agro Entreprenurship - A Study" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64540.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64540/problems-and-challenges-of-agro-entreprenurship--a-study/vinay-prasad-b
Comparative Analysis of Total Corporate Disclosure of Selected IT Companies o...ijtsrd
Disclosure is a process through which a business enterprise communicates with external parties. A corporate disclosure is communication of financial and non financial information of the activities of a business enterprise to the interested entities. Corporate disclosure is done through publishing annual reports. So corporate disclosure through annual reports plays a vital role in the life of all the companies and provides valuable information to investors. The basic objectives of corporate disclosure is to give a true and fair view of companies to the parties related either directly or indirectly like owner, government, creditors, shareholders etc. in the companies act, provisions have been made about mandatory and voluntary disclosure. The IT sector in India is rapidly growing, the trend to invest in the IT sector is rising and employment opportunities in IT sectors are also increasing. Therefore the IT sector is expected to have fair, full and adequate disclosure of all information. Unfair and incomplete disclosure may adversely affect the entire economy. A research study on disclosure practices of IT companies could play an important role in this regard. Hence, the present research study has been done to study and review comparative analysis of total corporate disclosure of selected IT companies of India and to put forward overall findings and suggestions with a view to increase disclosure score of these companies. The researcher hopes that the present research study will be helpful to all selected Companies for improving level of corporate disclosure through annual reports as well as the government, creditors, investors, all business organizations and upcoming researcher for comparative analyses of level of corporate disclosure with special reference to selected IT companies. Dr. Vaibhavi D. Thaker "Comparative Analysis of Total Corporate Disclosure of Selected IT Companies of India" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64539.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64539/comparative-analysis-of-total-corporate-disclosure-of-selected-it-companies-of-india/dr-vaibhavi-d-thaker
The Impact of Educational Background and Professional Training on Human Right...ijtsrd
This study investigated the impact of educational background and professional training on human rights awareness among secondary school teachers in the Marathwada region of Maharashtra, India. The key findings reveal that higher levels of education, particularly a master’s degree, and fields of study related to education, humanities, or social sciences are associated with greater human rights awareness among teachers. Additionally, both pre service teacher training and in service professional development programs focused on human rights education significantly enhance teacher’s knowledge, skills, and competencies in promoting human rights principles in their classrooms. Baig Ameer Bee Mirza Abdul Aziz | Dr. Syed Azaz Ali Amjad Ali "The Impact of Educational Background and Professional Training on Human Rights Awareness among Secondary School Teachers" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64529.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/64529/the-impact-of-educational-background-and-professional-training-on-human-rights-awareness-among-secondary-school-teachers/baig-ameer-bee-mirza-abdul-aziz
A Study on the Effective Teaching Learning Process in English Curriculum at t...ijtsrd
“One Language sets you in a corridor for life. Two languages open every door along the way” Frank Smith English as a foreign language or as a second language has been ruling in India since the period of Lord Macaulay. But the question is how much we teach or learn English properly in our culture. Is there any scope to use English as a language rather than a subject How much we learn or teach English without any interference of mother language specially in the classroom teaching learning scenario in West Bengal By considering all these issues the researcher has attempted in this article to focus on the effective teaching learning process comparing to other traditional strategies in the field of English curriculum at the secondary level to investigate whether they fulfill the present teaching learning requirements or not by examining the validity of the present curriculum of English. The purpose of this study is to focus on the effectiveness of the systematic, scientific, sequential and logical transaction of the course between the teachers and the learners in the perspective of the 5Es programme that is engage, explore, explain, extend and evaluate. Sanchali Mondal | Santinath Sarkar "A Study on the Effective Teaching Learning Process in English Curriculum at the Secondary Level of West Bengal" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd62412.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/62412/a-study-on-the-effective-teaching-learning-process-in-english-curriculum-at-the-secondary-level-of-west-bengal/sanchali-mondal
The Role of Mentoring and Its Influence on the Effectiveness of the Teaching ...ijtsrd
This paper reports on a study which was conducted to investigate the role of mentoring and its influence on the effectiveness of the teaching of Physics in secondary schools in the South West Region of Cameroon. The study adopted the convergent parallel mixed methods design, focusing on respondents in secondary schools in the South West Region of Cameroon. Both quantitative and qualitative data were collected, analysed separately, and the results were compared to see if the findings confirm or disconfirm each other. The quantitative analysis found that majority of the respondents 72 of Physics teachers affirmed that they had more experienced colleagues as mentors to help build their confidence, improve their teaching, and help them improve their effectiveness and efficiency in guiding learners’ achievements. Only 28 of the respondents disagreed with these statements. With majority respondents 72 agreeing with the statements, it implies that in most secondary schools, experienced Physics teachers act as mentors to build teachers’ confidence in teaching and improving students’ learning. The interview qualitative data analysis summarized how secondary school Principals use meetings with mentors and mentees to promote mentorship in the school milieu. This has helped strengthen teachers’ classroom practices in secondary schools in the South West Region of Cameroon. With the results confirming each other, the study recommends that mentoring should focus on helping teachers employ social interactions and instructional practices feedback and clarity in teaching that have direct measurable impact on students’ learning achievements. Andrew Ngeim Sumba | Frederick Ebot Ashu | Peter Agborbechem Tambi "The Role of Mentoring and Its Influence on the Effectiveness of the Teaching of Physics in Secondary Schools in the South West Region of Cameroon" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64524.pdf Paper Url: https://www.ijtsrd.com/management/management-development/64524/the-role-of-mentoring-and-its-influence-on-the-effectiveness-of-the-teaching-of-physics-in-secondary-schools-in-the-south-west-region-of-cameroon/andrew-ngeim-sumba
Design Simulation and Hardware Construction of an Arduino Microcontroller Bas...ijtsrd
This study primarily focuses on the design of a high side buck converter using an Arduino microcontroller. The converter is specifically intended for use in DC DC applications, particularly in standalone solar PV systems where the PV output voltage exceeds the load or battery voltage. To evaluate the performance of the converter, simulation experiments are conducted using Proteus Software. These simulations provide insights into the input and output voltages, currents, powers, and efficiency under different state of charge SoC conditions of a 12V,70Ah rechargeable lead acid battery. Additionally, the hardware design of the converter is implemented, and practical data is collected through operation, monitoring, and recording. By comparing the simulation results with the practical results, the efficiency and performance of the designed converter are assessed. The findings indicate that while the buck converter is suitable for practical use in standalone PV systems, its efficiency is compromised due to a lower output current. Chan Myae Aung | Dr. Ei Mon "Design Simulation and Hardware Construction of an Arduino-Microcontroller Based DC-DC High-Side Buck Converter for Standalone PV System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64518.pdf Paper Url: https://www.ijtsrd.com/engineering/mechanical-engineering/64518/design-simulation-and-hardware-construction-of-an-arduinomicrocontroller-based-dcdc-highside-buck-converter-for-standalone-pv-system/chan-myae-aung
Sustainable Energy by Paul A. Adekunte | Matthew N. O. Sadiku | Janet O. Sadikuijtsrd
Energy becomes sustainable if it meets the needs of the present without compromising the ability of future generations to meet their own needs. Some of the definitions of sustainable energy include the considerations of environmental aspects such as greenhouse gas emissions, social, and economic aspects such as energy poverty. Generally far more sustainable than fossil fuel are renewable energy sources such as wind, hydroelectric power, solar, and geothermal energy sources. Worthy of note is that some renewable energy projects, like the clearing of forests to produce biofuels, can cause severe environmental damage. The sustainability of nuclear power which is a low carbon source is highly debated because of concerns about radioactive waste, nuclear proliferation, and accidents. The switching from coal to natural gas has environmental benefits, including a lower climate impact, but could lead to delay in switching to more sustainable options. “Carbon capture and storage” can be built into power plants to remove the carbon dioxide CO2 emissions, but this technology is expensive and has rarely been implemented. Leading non renewable energy sources around the world is fossil fuels, coal, petroleum, and natural gas. Nuclear energy is usually considered another non renewable energy source, although nuclear energy itself is a renewable energy source, but the material used in nuclear power plants is not. The paper addresses the issue of sustainable energy, its attendant benefits to the future generation, and humanity in general. Paul A. Adekunte | Matthew N. O. Sadiku | Janet O. Sadiku "Sustainable Energy" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64534.pdf Paper Url: https://www.ijtsrd.com/engineering/electrical-engineering/64534/sustainable-energy/paul-a-adekunte
Concepts for Sudan Survey Act Implementations Executive Regulations and Stand...ijtsrd
This paper aims to outline the executive regulations, survey standards, and specifications required for the implementation of the Sudan Survey Act, and for regulating and organizing all surveying work activities in Sudan. The act has been discussed for more than 5 years. The Land Survey Act was initiated by the Sudan Survey Authority and all official legislations were headed by the Sudan Ministry of Justice till it was issued in 2022. The paper presents conceptual guidelines to be used for the Survey Act implementation and to regulate the survey work practice, standardizing the field surveys, processing, quality control, procedures, and the processes related to survey work carried out by the stakeholders and relevant authorities in Sudan. The conceptual guidelines are meant to improve the quality and harmonization of geospatial data and to aid decision making processes as well as geospatial information systems. The established comprehensive executive regulations will govern and regulate the implementation of the Sudan Survey Geomatics Act in all surveying and mapping practices undertaken by the Sudan Survey Authority SSA and state local survey departments for public or private sector organizations. The targeted standards and specifications include the reference frame, projection, coordinate systems, and the guidelines and specifications that must be followed in the field of survey work, processes, and mapping products. In the last few decades, there has been a growing awareness of the importance of geomatics activities and measurements on the Earths surface in space and time, together with observing and mapping the changes. In such cases, data must be captured promptly, standardized, and obtained with more accuracy and specified in much detail. The paper will also highlight the current situation in Sudan, the degree to which survey standards are used, the problems encountered, and the errors that arise from not using the standards and survey specifications. Kamal A. A. Sami "Concepts for Sudan Survey Act Implementations - Executive Regulations and Standards" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63484.pdf Paper Url: https://www.ijtsrd.com/engineering/civil-engineering/63484/concepts-for-sudan-survey-act-implementations--executive-regulations-and-standards/kamal-a-a-sami
Towards the Implementation of the Sudan Interpolated Geoid Model Khartoum Sta...ijtsrd
The discussions between ellipsoid and geoid have invoked many researchers during the recent decades, especially during the GNSS technology era, which had witnessed a great deal of development but still geoid undulation requires more investigations. To figure out a solution for Sudans local geoid, this research has tried to intake the possibility of determining the geoid model by following two approaches, gravimetric and geometrical geoid model determination, by making use of GNSS leveling benchmarks at Khartoum state. The Benchmarks are well distributed in the study area, in which, the horizontal coordinates and the height above the ellipsoid have been observed by GNSS while orthometric heights were carried out using precise leveling. The Global Geopotential Model GGM represented in EGM2008 has been exploited to figure out the geoid undulation at the benchmarks in the study area. This is followed by a fitting process, that has been done to suit the geoid undulation data which has been computed using GNSS leveling data and geoid undulation inspired by the EGM2008. Two geoid surfaces were created after the fitting process to ensure that they are identical and both of them could be counted for getting the same geoid undulation with an acceptable accuracy. In this respect, statistical operation played an important role in ensuring the consistency and integrity of the model by applying cross validation techniques splitting the data into training and testing datasets for building the geoid model and testing its eligibility. The geometrical solution for geoid undulation computation has been utilized by applying straightforward equations that facilitate the calculation of the geoid undulation directly through applying statistical techniques for the GNSS leveling data of the study area to get the common equation parameters values that could be utilized to calculate geoid undulation of any position in the study area within the claimed accuracy. Both systems were checked and proved eligible to be used within the study area with acceptable accuracy which may contribute to solving the geoid undulation problem in the Khartoum area, and be further generalized to determine the geoid model over the entire country, and this could be considered in the future, for regional and continental geoid model. Ahmed M. A. Mohammed. | Kamal A. A. Sami "Towards the Implementation of the Sudan Interpolated Geoid Model (Khartoum State Case Study)" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63483.pdf Paper Url: https://www.ijtsrd.com/engineering/civil-engineering/63483/towards-the-implementation-of-the-sudan-interpolated-geoid-model-khartoum-state-case-study/ahmed-m-a-mohammed
Activating Geospatial Information for Sudans Sustainable Investment Mapijtsrd
Sudan is witnessing an acceleration in the processes of development and transformation in the performance of government institutions to raise the productivity and investment efficiency of the government sector. The development plans and investment opportunities have focused on achieving national goals in various sectors. This paper aims to illuminate the path to the future and provide geospatial data and information to develop the investment climate and environment for all sized businesses, and to bridge the development gap between the Sudan states. The Sudan Survey Authority SSA is the main advisor to the Sudan Government in conducting surveying, mappings, designing, and developing systems related to geospatial data and information. In recent years, SSA made a strategic partnership with the Ministry of Investment to activate Geospatial Information for Sudans Sustainable Investment and in particular, for the preparation and implementation of the Sudan investment map, based on the directives and objectives of the Ministry of Investment MI in Sudan. This paper comes within the framework of activating the efforts of the Ministry of Investment to develop technical investment services by applying techniques adopted by the Ministry and its strategic partners for advancing investment processes in the country. Kamal A. A. Sami "Activating Geospatial Information for Sudan's Sustainable Investment Map" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63482.pdf Paper Url: https://www.ijtsrd.com/engineering/information-technology/63482/activating-geospatial-information-for-sudans-sustainable-investment-map/kamal-a-a-sami
Educational Unity Embracing Diversity for a Stronger Societyijtsrd
In a rapidly changing global landscape, the importance of education as a unifying force cannot be overstated. This paper explores the crucial role of educational unity in fostering a stronger and more inclusive society through the embrace of diversity. By examining the benefits of diverse learning environments, the paper aims to highlight the positive impact on societal strength. The discussion encompasses various dimensions, from curriculum design to classroom dynamics, and emphasizes the need for educational institutions to become catalysts for unity in diversity. It highlights the need for a paradigm shift in educational policies, curricula, and pedagogical approaches to ensure that they are reflective of the diverse fabric of society. This paper also addresses the challenges associated with implementing inclusive educational practices and offers practical strategies for overcoming barriers. It advocates for collaborative efforts between educational institutions, policymakers, and communities to create a supportive ecosystem that promotes diversity and unity. Mr. Amit Adhikari | Madhumita Teli | Gopal Adhikari "Educational Unity: Embracing Diversity for a Stronger Society" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64525.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/64525/educational-unity-embracing-diversity-for-a-stronger-society/mr-amit-adhikari
Integration of Indian Indigenous Knowledge System in Management Prospects and...ijtsrd
The diversity of indigenous knowledge systems in India is vast and can vary significantly between different communities and regions. Preserving and respecting these knowledge systems is crucial for maintaining cultural heritage, promoting sustainable practices, and fostering cross cultural understanding. In this paper, an overview of the prospects and challenges associated with incorporating Indian indigenous knowledge into management is explored. It is found that IIKS helps in management in many areas like sustainable development, tourism, food security, natural resource management, cultural preservation and innovation, etc. However, IIKS integration with management faces some challenges in the form of a lack of documentation, cultural sensitivity, language barriers legal framework, etc. Savita Lathwal "Integration of Indian Indigenous Knowledge System in Management: Prospects and Challenges" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63500.pdf Paper Url: https://www.ijtsrd.com/management/accounting-and-finance/63500/integration-of-indian-indigenous-knowledge-system-in-management-prospects-and-challenges/savita-lathwal
DeepMask Transforming Face Mask Identification for Better Pandemic Control in...ijtsrd
The COVID 19 pandemic has highlighted the crucial need of preventive measures, with widespread use of face masks being a key method for slowing the viruss spread. This research investigates face mask identification using deep learning as a technological solution to be reducing the risk of coronavirus transmission. The proposed method uses state of the art convolutional neural networks CNNs and transfer learning to automatically recognize persons who are not wearing masks in a variety of circumstances. We discuss how this strategy improves public health and safety by providing an efficient manner of enforcing mask wearing standards. The report also discusses the obstacles, ethical concerns, and prospective applications of face mask detection systems in the ongoing fight against the pandemic. Dilip Kumar Sharma | Aaditya Yadav "DeepMask: Transforming Face Mask Identification for Better Pandemic Control in the COVID-19 Era" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64522.pdf Paper Url: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/64522/deepmask-transforming-face-mask-identification-for-better-pandemic-control-in-the-covid19-era/dilip-kumar-sharma
Streamlining Data Collection eCRF Design and Machine Learningijtsrd
Efficient and accurate data collection is paramount in clinical trials, and the design of Electronic Case Report Forms eCRFs plays a pivotal role in streamlining this process. This paper explores the integration of machine learning techniques in the design and implementation of eCRFs to enhance data collection efficiency. We delve into the synergies between eCRF design principles and machine learning algorithms, aiming to optimize data quality, reduce errors, and expedite the overall data collection process. The application of machine learning in eCRF design brings forth innovative approaches to data validation, anomaly detection, and real time adaptability. This paper discusses the benefits, challenges, and future prospects of leveraging machine learning in eCRF design for streamlined and advanced data collection in clinical trials. Dhanalakshmi D | Vijaya Lakshmi Kannareddy "Streamlining Data Collection: eCRF Design and Machine Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63515.pdf Paper Url: https://www.ijtsrd.com/biological-science/biotechnology/63515/streamlining-data-collection-ecrf-design-and-machine-learning/dhanalakshmi-d
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. However, the researchers call for acknowledging the broader Impact of these sacrifices on the future global movement of FreePalestine.
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
Figure 1: The motivating example
1.1. Cloud Computing
Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it represents in system diagrams. Cloud computing entrusts remote services with a user's data, software and computation.
Cloud computing comprises hardware and software resources made available over the Internet as managed third-party services. These services typically provide access to advanced software applications and high-end networks of server computers.
2. RELATED WORK
Securing information has long been an important issue. We must protect ourselves from the threat of loss: consider the Library of Alexandria; and from unauthorized access: consider the notorious business of the 'Humiliation Sheets', going back many years. This has never been clearer than today, when enormous amounts of data (dare one say lesser amounts of information) are stored on computer systems and constantly moved around the Internet at essentially no cost.
The growing popularity of cloud storage services has led organizations that handle critical data to consider using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems and financial data are a few examples of critical data that could be moved to the cloud. Nevertheless, the reliability and security of data stored in the cloud remain major concerns.
A growing amount of data is produced daily, resulting in a growing demand for storage solutions. While cloud storage providers offer practically unlimited storage capacity, data owners seek geographical and provider diversity in data placement, to avoid vendor lock-in and to increase availability and durability. Additionally, depending on the client's data access pattern, one cloud provider may be cheaper than another.
3. SYSTEM ANALYSIS
We address the problem of reducing data leakage to each individual CSP in a multicloud storage framework, and we provide mechanisms for distributing clients' data over different CSPs in a leakage-aware way. First, we give an efficient approach for generating similarity-preserving signatures for data chunks. Next, based on this approach, we design a chunk placement algorithm that efficiently co-locates similar chunks in a multicloud environment.
We present StoreSim, a data-leakage-aware multicloud storage framework that combines three major components, and we also formulate the data leakage optimization problem in the multicloud setting.
We propose an approximate algorithm, BFSMinHash, based on MinHash and the Bloom filter, to generate similarity-preserving signatures for data chunks. We also define a pairwise information leakage function based on Jaccard similarity.
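A minimal Java sketch of such a pairwise measure follows, under the simplifying assumption that leakage between two chunks is modeled directly as the Jaccard similarity of their token sets; the exact StoreSim leakage function is not reproduced in this section, and the class and method names below are hypothetical.

import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration: pairwise information leakage between two chunks,
// modeled here simply as the Jaccard similarity of their token sets.
public class PairwiseLeakage {

    // Jaccard similarity J(A, B) = |A ∩ B| / |A ∪ B|; J(∅, ∅) is defined as 0 here.
    static double jaccard(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> chunk1 = Set.of("the", "quick", "brown", "fox");
        Set<String> chunk2 = Set.of("the", "quick", "red", "fox");
        // Higher similarity between chunks stored on different clouds means
        // more information is revealed to an observer of either cloud.
        System.out.println("Pairwise leakage (Jaccard) = " + jaccard(chunk1, chunk2));
    }
}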
3.1. ADVANTAGES OF PROPOSED SYSTEM
We demonstrate the effectiveness and efficiency of our proposed scheme in reducing data leakage across multiple clouds. Moreover, our analysis of system attackability shows that StoreSim makes attacks on data chunks more complex.
To the best of our knowledge, this is the first work that applies near-duplicate detection techniques to limit data leakage in multicloud storage services. Our work focuses on optimizing the information leakage of the storage service in a multicloud environment by exploiting the data similarity that arises from the synchronization of modified data.
3.2. ALGORITHMS
A. MINIMUM HASH:
MinHash [10, 11] uses hashing to quickly estimate the Jaccard similarity of two sets, which can be interpreted as "the probability that a random element from the union of two sets also lies in their intersection":

Prob[min(h(S1)) = min(h(S2))] = J(S1, S2) = |S1 ∩ S2| / |S1 ∪ S2|,

where h is a min-wise independent hash function and min(h(S1)) gives the minimum value of h(x), x ∈ S1. We can therefore select a sequence of hash functions h1, h2, ..., hk and record the minimum value under each hash function as the MinHash signature, i.e., Sig(S) = {min(hi(S)) | i = 1, ..., k}. It follows that the Jaccard similarity of two sets can be estimated as |Sig(S1) ∩ Sig(S2)| / k. However, MinHash with many hash functions needs to compute the value of every hash function for each element of every set, which is computationally expensive. In our paper, we adopt a variant of MinHash that avoids this heavy computation by using only a single hash function. Rather than picking a single minimum value per hash function, the signature of MinHash with a single hash function h picks the k smallest values from the set h(S), denoted as Sig(S) = mink(h(S)). Hence, a random sample of S1 ∪ S2 can be represented as X = mink(h(S1 ∪ S2)) = mink(Sig(S1) ∪ Sig(S2)), and the Jaccard similarity is estimated as |X ∩ Sig(S1) ∩ Sig(S2)| / k.
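For illustration, a short Java sketch of this single-hash, bottom-k variant follows. This is not the authors' BFSMinHash, which additionally uses a Bloom filter; the choice of hash function and the signature size k = 128 are assumptions.

import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Illustrative bottom-k MinHash with a single hash function:
// Sig(S) = the k smallest values of h(x) over all x in S.
public class BottomKMinHash {
    static final int K = 128; // signature size (assumed)

    // Single hash function h; String.hashCode() is used purely for illustration.
    static Set<Integer> signature(Set<String> s) {
        TreeSet<Integer> hashed = new TreeSet<>();
        for (String x : s) hashed.add(x.hashCode());
        Set<Integer> sig = new HashSet<>();
        for (int v : hashed) {           // TreeSet iterates in ascending order,
            if (sig.size() == K) break;  // so this keeps the K smallest values
            sig.add(v);
        }
        return sig;
    }

    // X = bottom-k of Sig(S1) ∪ Sig(S2); estimate J(S1, S2) ≈ |X ∩ Sig(S1) ∩ Sig(S2)| / k.
    static double estimateJaccard(Set<Integer> sig1, Set<Integer> sig2) {
        TreeSet<Integer> union = new TreeSet<>(sig1);
        union.addAll(sig2);
        Set<Integer> x = new HashSet<>();
        for (int v : union) {
            if (x.size() == K) break;
            x.add(v);
        }
        int denom = Math.min(K, union.size()); // guard for sets smaller than K
        x.retainAll(sig1);
        x.retainAll(sig2);
        return denom == 0 ? 0.0 : (double) x.size() / denom;
    }

    public static void main(String[] args) {
        Set<String> s1 = Set.of("block1", "block2", "block3");
        Set<String> s2 = Set.of("block1", "block2", "block4");
        System.out.println(estimateJaccard(signature(s1), signature(s2)));
    }
}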
B. SHA-1:
SHA-1 (Secure Hash Algorithm 1) is a cryptographic hash function which takes an input and produces a 160-bit (20-byte) hash value. This hash value is known as a message digest and is usually rendered as a hexadecimal number 40 digits long. It is a U.S. Federal Information Processing Standard and was designed by the United States National Security Agency. SHA-1 has been considered insecure since 2005, and major technology vendors such as Microsoft, Google, Apple and Mozilla stopped accepting SHA-1 SSL certificates by 2017. In Java, the algorithm is obtained through the static getInstance() method. After selecting the algorithm, the message digest value is computed and the result is returned as a byte array. The BigInteger class is used to convert the resulting byte array into its signum representation, which is then converted into hexadecimal format to obtain the expected message digest.
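A minimal, self-contained Java example of this procedure (getInstance(), digest as a byte array, BigInteger signum conversion, hexadecimal output) is sketched below.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// SHA-1 message digest as described above: select the algorithm with
// getInstance(), compute the digest as a byte array, then convert it to hex.
public class Sha1Demo {
    static String sha1Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        // Signum 1 treats the byte array as a positive number.
        String hex = new BigInteger(1, digest).toString(16);
        // Left-pad to 40 hex digits (160 bits) if leading zeros were dropped.
        while (hex.length() < 40) hex = "0" + hex;
        return hex;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(sha1Hex("hello"));
        // expected: aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
    }
}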
C. Advanced Encryption Standard (AES):
The most popular and widely adopted symmetric encryption algorithm likely to be encountered nowadays is the Advanced Encryption Standard (AES). It is at least six times faster than Triple DES.
A replacement for DES was needed because its key size was too small. With increasing computing power, DES was considered vulnerable to exhaustive key-search attacks. Triple DES was designed to overcome this drawback, but it turned out to be slow.
The features of AES are as follows (a usage sketch is given after the list):
Symmetric-key symmetric block cipher
128-bit data, 128/192/256-bit keys
Stronger and faster than Triple DES
Fully specified, with public design details
Implementable in software in C and Java
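As a brief, generic illustration of AES usage in Java (not the specific scheme used in StoreSim; the CBC mode, 128-bit key size and sample plaintext below are assumptions for demonstration):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Illustrative AES-128 encryption and decryption in CBC mode with a random IV.
public class AesDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);                        // 128-bit key (192/256 also possible)
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[16];                // AES block size is 128 bits
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal("chunk contents".getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
    }
}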
4. FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forward with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the organization. For feasibility analysis, some understanding of the major requirements of the system is essential.
The three key considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
4.1. ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can put into the research and development of the system is limited. The expenditures must be justified. Thus the developed system was kept within budget, which was possible because most of the technologies used are freely available; only the customized products had to be purchased.
4.2. TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place an excessive demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system should have modest requirements, so that only minimal or no changes are needed to implement it.
4.3. SOCIAL FEASIBILITY:
The aim of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently.
5. SYSTEM DESIGN
Input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. Input design considered the following issues:
What data should be accepted as input?
How should the data be arranged or coded?
The dialogue to guide the operating personnel in providing input.
Methods for preparing input validations and the steps to follow when errors occur.
5.1. SYSTEM ARCHITECTURE
5.2. UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created, by the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and of the software development process. The UML uses mostly graphical notations to express the design of software projects.
5.3. USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies among those use cases. The main purpose of a use case diagram is to show which system functions are performed for which actor. The roles of the actors in the system can also be depicted.
The use case diagram includes the Metadata Server actor and the following use cases: Login, Select File, Split File into Multiple Parts, Upload File, Update File Parts, Download Files, View Uploaded Files, View Cloud Files, View User Details, Verify Hash Similarity, View Download Details, and Logout.
5.4. CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It clarifies which class contains which information.
5.5. SEQUENCE DIAGRAM:
A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.
5.6. ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.
5.7. ER Diagram
6. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.
6.1. Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of the individual software units of the application. It is done after the completion of an individual unit and before integration. This is a structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
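For instance, a small JUnit 5 unit test for the hypothetical PairwiseLeakage.jaccard() sketch shown earlier would validate its internal logic against known inputs and expected results (JUnit 5 is an assumed test dependency, not part of the described system):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Set;

// Hypothetical unit tests for the PairwiseLeakage sketch: clearly defined
// inputs with expected results, exercised at the component level.
class PairwiseLeakageTest {

    @Test
    void identicalSetsHaveSimilarityOne() {
        Set<String> s = Set.of("a", "b", "c");
        assertEquals(1.0, PairwiseLeakage.jaccard(s, s), 1e-9);
    }

    @Test
    void disjointSetsHaveSimilarityZero() {
        assertEquals(0.0, PairwiseLeakage.jaccard(Set.of("a"), Set.of("b")), 1e-9);
    }
}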
6.2. Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
7. CONCLUSION
Distributing data over multiple clouds gives users a degree of control over information leakage, in that no single cloud provider learns the user's data in its entirety. However, unplanned placement of data chunks can result in avoidable information leakage. We show that distributing data chunks in an unplanned way can expose as much as 80% of a user's data as the proportion of synchronized data increases. To optimize the information leakage, we designed StoreSim, an information-leakage-aware storage system for the multicloud. StoreSim achieves this goal by using novel algorithms, BFSMinHash and SPClustering, which place the data with minimal information leakage (in terms of similarity) on the same cloud. Through a detailed evaluation based on real-world datasets, we show that StoreSim is both effective and efficient (in terms of time and storage space) in limiting information leakage during synchronization in the multicloud. We show that StoreSim achieves near-optimal performance and reduces information leakage by as much as 60% compared with unplanned placement. Finally, through our attackability analysis, we further show that StoreSim not only reduces the risk of wholesale information leakage but also makes attacks on individual data chunks more complex.
REFERENCES
[1] J. Crowcroft, "On the duality of resilience and privacy," in Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 471, no. 2175. The Royal Society, 2015, p. 20140862.
[2] A. Bessani, M. Correia, B. Quaresma, F. André, and P. Sousa, "DepSky: dependable and secure storage in a cloud-of-clouds," ACM Transactions on Storage (TOS), vol. 9, no. 4, p. 12, 2013.
[3] H. Chen, Y. Hu, P. Lee, and Y. Tang, "NCCloud: A network-coding-based storage system in a cloud-of-clouds," 2013.
[4] T. G. Papaioannou, N. Bonvin, and K. Aberer, "Scalia: an adaptive scheme for efficient multi-cloud storage," in Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis. IEEE Computer Society Press, 2012, p. 20.
[5] Z. Wu, M. Butkiewicz, D. Perkins, E. Katz-Bassett, and H. V. Madhyastha, "SPANStore: Cost-effective geo-replicated storage spanning multiple cloud services," in Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles. ACM, 2013, pp. 292–308.
[6] T. Suel and N. Memon, "Algorithms for delta compression and remote file synchronization," 2002.
[7] I. Drago, E. Bocchi, M. Mellia, H. Slatman, and A. Pras, "Benchmarking personal cloud storage," in Proceedings of the 2013 Conference on Internet Measurement. ACM, 2013, pp. 205–212.
[8] I. Drago, M. Mellia, M. M. Munafo, A. Sperotto, R. Sadre, and A. Pras, "Inside Dropbox: understanding personal cloud storage services," in Proceedings of the 2012 ACM Conference on Internet Measurement. ACM, 2012, pp. 481–494.
[9] U. Manber et al., "Finding similar files in a large file system," in Usenix Winter, vol. 94, 1994, pp. 1–10.
[10] P. Mahajan, S. Setty, S. Lee, A. Clement, L. Alvisi, M. Dahlin, and M. Walfish, "Depot: Cloud storage with minimal trust," ACM Transactions on Computer Systems (TOCS), vol. 29, no. 4, p. 12, 2011.
[11] A. J. Feldman, W. P. Zeller, M. J. Freedman, and E. W. Felten, "SPORC: Group collaboration using untrusted cloud resources," in OSDI, vol. 10, 2010, pp. 337–350.
[12] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica, "Wide-area cooperative storage with CFS," in ACM SIGOPS Operating Systems Review, vol. 35, no. 5. ACM, 2001, pp. 202–215.
[13] L. P. Cox and B. D. Noble, "Samsara: Honor among thieves in peer-to-peer storage," ACM SIGOPS Operating Systems Review, vol. 37, no. 5, pp. 120–132, 2003.
[14] H. Zhuang, R. Rahman, and K. Aberer, "Decentralizing the cloud: How can small data centers cooperate?" in Peer-to-Peer Computing (P2P), 14th IEEE International Conference on. IEEE, 2014, pp. 1–10.
[15] H. Zhuang, R. Rahman, P. Hui, and K. Aberer, "StoreSim: Optimizing information leakage in multicloud storage services," in Cloud Computing Technology and Science (CloudCom), 2015 IEEE 7th International Conference on. IEEE, 2015, pp. 379–386.