The use of web services is becoming increasingly important. Since web services may be provided by unknown providers, many researchers have designed mechanisms to ensure secure access to web services. Existing mechanisms generally decide statically whether a requester can invoke a web service, but ignore the importance of dynamically preventing information leakage during web service execution. This paper proposes a lattice-based information flow control model, WSIFC (web service information flow control), to dynamically prevent information leakage. It uses simple rules to monitor the flows of sensitive information during web service execution.
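The abstract does not reproduce WSIFC's rules, but the general lattice-based flow check it builds on can be sketched as follows. The labels, their ordering, and the monitoring loop are illustrative assumptions, not the paper's actual model:

```python
# Sketch of a lattice-based flow check (illustrative; not the WSIFC rules).
# A flow from label a to label b is permitted only when a <= b in the lattice,
# i.e. information may only move toward equal-or-higher sensitivity.

# A simple linear lattice of sensitivity labels (assumed for illustration).
LATTICE = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def join(a: str, b: str) -> str:
    """Least upper bound: the label of data derived from both a and b."""
    return a if LATTICE[a] >= LATTICE[b] else b

def flow_allowed(src: str, dst: str) -> bool:
    """Permit a flow only if it does not move data below its sensitivity."""
    return LATTICE[src] <= LATTICE[dst]

def monitor(steps, initial="public"):
    """Monitor a chain of service invocations: track the running label of
    the data and reject any step that would leak it to a lower-labeled sink.
    Each step is a (source_label, sink_label) pair."""
    label = initial
    for src_label, sink_label in steps:
        label = join(label, src_label)           # data picks up sensitivity
        if not flow_allowed(label, sink_label):  # leak: sink label too low
            return False, label
    return True, label
```

The point of monitoring at execution time, rather than deciding once at invocation, is that the running label can rise mid-execution as the service touches more sensitive sources.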
In an organization as virtual as the cloud, access control systems are needed to constrain users' direct or indirect actions that could lead to a breach of security. In the cloud, apart from owner access to confidential data, third-party auditing and accounting are performed, which could stir up further data leaks. To control such data leaks and preserve integrity, several security policies based on roles, identities, and user attributes have been proposed in the past and found ineffective, since they depend on static policies that do not monitor data access and its origin. Provenance, on the other hand, tracks data usage and its origin, which proves the authenticity of data. To employ provenance in a real-time system like the cloud, the service provider needs to store metadata about data alteration events, commonly called Provenance Information. This paper presents a provenance-policy-based access control model, designed and integrated with the system, that not only makes data auditable but also incorporates accountability for data alteration events.
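The combination of a static policy with a provenance log can be sketched roughly as below. The store, the policy shape, and the dynamic rule (separating alteration from auditing) are illustrative assumptions; the paper's actual model is not given in the abstract:

```python
# Illustrative sketch of provenance-aware access control: every granted
# action is logged as Provenance Information, and one assumed dynamic rule
# consults that log in addition to the static policy.
import time

class ProvenanceStore:
    """Append-only log of data events (the 'Provenance Information')."""
    def __init__(self):
        self.records = []

    def log(self, user, obj, action):
        self.records.append({"user": user, "object": obj,
                             "action": action, "ts": time.time()})

    def history(self, obj):
        return [r for r in self.records if r["object"] == obj]

def check_access(user, obj, action, store, policy):
    """Grant access only if the static policy allows the action AND the
    user's recorded provenance does not violate a dynamic rule."""
    if action not in policy.get(user, set()):
        return False
    # Assumed dynamic rule: a user who already altered the object may not
    # also audit it, keeping alteration and accountability separate.
    if action == "audit" and any(r["user"] == user and r["action"] == "write"
                                 for r in store.history(obj)):
        return False
    store.log(user, obj, action)
    return True
```

Because every grant is logged, the same store that enforces the dynamic rule also serves as the audit trail for alteration events.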
.Net projects 2011 by core ieeeprojects.com msudan92
The document contains summaries of 15 IEEE projects from 2011. Each project summary is 1-3 sentences describing the high level goal or problem addressed by the project. For example, one project proposes a policy enforcing mechanism to ensure fair communication in mobile ad hoc networks by regulating applications through proper communication policies. Another project presents a query formulation language called MashQL to easily query and fuse structured data from multiple sources on the web.
Evaluation of Authentication Mechanisms in Control Plane Applications for Sof...Siyabonga Masuku
This document evaluates different authentication mechanisms for control plane applications in software defined networks (SDNs). It discusses how a lack of trust between network applications and controllers is a challenge for SDN security. The document reviews several proposed authentication mechanisms and outlines a research proposal to evaluate these mechanisms under different performance scenarios in order to recommend a well-performing and reliable standard for the northbound API. It will use literature reviews and simulations with Mininet and NS3 to test authentication mechanisms on factors like the number of authenticated/unauthenticated applications over time.
Security Check in Cloud Computing through Third Party Auditorijsrd.com
In cloud computing, data owners host their data on cloud servers and users (data consumers) access the data from those servers. Because of this outsourcing, an independent auditing service is required to check data integrity in the cloud. Some existing remote integrity checking methods can only serve static archived data and therefore cannot be used in the auditing service, since data in the cloud can be dynamically updated. Thus, an efficient and secure dynamic auditing protocol is required to convince data owners that their data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems and propose a privacy-preserving auditing protocol. Then, we extend our auditing protocol to support dynamic data operations, which is efficient and provably secure in the random oracle model.
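The protocol itself is not reproduced in this summary; as generic background on hash-based integrity checking over data blocks (not this paper's scheme), a minimal Merkle-root computation looks like this:

```python
# Minimal Merkle root over data blocks: the owner keeps only the root,
# and any later change to any block changes the recomputed root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash over a list of data blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Real dynamic-auditing protocols go well beyond this (challenge-response sampling, authenticated update structures) precisely so the verifier never needs all the blocks, but the tamper-evidence idea is the same.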
Final year IEEE 2016-2017 PROJECTS TITLES (IEEE 2016 papers) For ME,M.Tech,BE...S3 Infotech IEEE Projects
This document provides descriptions for 10 IEEE projects related to cloud computing and data security. It includes summaries of projects focused on: 1) Dynamic and public auditing for cloud data with fair arbitration. 2) Enabling cloud storage auditing while outsourcing key updates. 3) Providing user security guarantees in public cloud infrastructures.
CLOUD BASED ACCESS CONTROL MODEL FOR SELECTIVE ENCRYPTION OF DOCUMENTS WITH T...IJNSA Journal
Cloud computing refers to a type of networked computing whereby an application can run on connected servers instead of local servers. The cloud can be used to store data, share resources, and provide services. Technically, there is very little difference between public and private cloud architectures. However, the security and privacy of data is a major issue when sensitive data is entrusted to third-party cloud service providers. Encryption with fine-grained access control is therefore essential to enforce security in clouds. Several techniques implementing attribute-based encryption for fine-grained access control have been proposed. Under such approaches, the key management overhead is high in terms of computational complexity, and the secret sharing mechanisms add further complexity. Moreover, they lack mechanisms to handle the existence of traitors. Our proposed approach addresses these requirements and reduces the overhead of key management as well as secret sharing by using efficient algorithms and protocols. In addition, a traitor tracing technique is introduced into the cloud computing two-layer encryption environment.
Bluedog white paper - Our WebObjects Web Security Modeltom termini
At Bluedog, our seminal product, the Workbench “Always on the Job!” social collaboration SaaS platform, is secured the way we have architected all our three-tier Java-based web applications. We secure the application with input validation, a core authentication and authorization framework based on LDAP and JNDI, configuration management that ensures testing for vulnerabilities, and strong use of cryptography. In addition, we use session management, exception control, auditing, and logging to ensure the security of the application and its web services.
We also secure our routers and other aspects of the network, as well as the host servers (patching, account management, directory access, and port monitoring). Most importantly, we design our WebObjects web applications securely from the outset.
CLOUD BASED ACCESS CONTROL MODEL FOR SELECTIVE ENCRYPTION OF DOCUMENTS WITH T...IJNSA Journal
This document proposes a cloud-based access control model for selectively encrypting documents with traitor detection. It aims to address the high computational overhead of key management and secret sharing in existing attribute-based encryption approaches for cloud data security. The proposed model uses efficient algorithms and protocols like aggregate equality oblivious commitment-based envelope protocol and fast access control vector broadcast group key management to reduce overhead. It also introduces a traitor tracing technique to identify any traitors in the two-layer encryption environment for cloud computing.
Blockchain-Based Secure and Scalable Routing Mechanisms for VANETs ApplicationsIJCNCJournal
VANETs have seen a boom in the distribution of significant source data, enabling connected-vehicle communications that enhance roadway safety. Despite the potential for interesting applications in vehicle networks, there are still unresolved issues that could hinder bandwidth utilization once deployed. Specifically, insider attacks on VANET platforms such as blackhole attempts can completely stop vehicle-to-vehicle communications and impair the network's performance. In this study, we provide a blockchain-based decentralized trust scoring architecture for the participants in the network to identify existing and blacklisted insider adversaries in a VANET. To address this concern, we suggest a two-level detection technique: in the first level, neighboring nodes determine each other's trustworthiness, and in the second level, trust scores for vehicle nodes are aggregated using a consortium blockchain-based mechanism that uses authorized Road Side Units (RSUs) for consensus. The blacklisted node records are then periodically updated based on the trust scores supplied by nearby nodes. With regard to the practical scope of the network, the experimental study demonstrates that the suggested solution is effective and sustainable. To improve the packet delivery ratio and vehicle node security in the VANET, a blockchain-based Trust-LEACH routing technique has also been created. The performance analysis covers computational cost, computational time for block creation, network analysis, security analysis, and MITM attack analysis. Additionally, we provide evidence that the suggested approach enhances VANET reliability by thwarting and removing insider threat initiation nodes via its blacklist.
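The two-level idea (local trust reports, then aggregation and blacklisting) can be sketched as below. The threshold, score range, and averaging rule are assumptions for illustration; in the paper the aggregation step is performed by the RSU-based consortium blockchain, not a plain function:

```python
# Illustrative sketch of two-level trust scoring: neighbors report local
# trust scores in [0, 1]; an aggregation step averages them and blacklists
# nodes below an assumed threshold, re-evaluating periodically.

BLACKLIST_THRESHOLD = 0.4   # assumed cut-off, not the paper's value

def aggregate_trust(reports):
    """reports: {node_id: [neighbor scores]} -> {node_id: average score}"""
    return {nid: sum(s) / len(s) for nid, s in reports.items() if s}

def update_blacklist(reports, blacklist=None):
    """Second level: aggregate scores and refresh the blacklist records."""
    blacklist = set(blacklist or ())
    for nid, score in aggregate_trust(reports).items():
        if score < BLACKLIST_THRESHOLD:
            blacklist.add(nid)        # e.g. a blackhole node dropping packets
        else:
            blacklist.discard(nid)    # scores are periodically re-evaluated
    return blacklist
```

Putting the aggregation on a consortium chain (rather than in one node, as here) is what prevents a single compromised aggregator from whitewashing an insider's score.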
International Journal on Web Service Computing (IJWSC)ijwscjournal
A Web Service is a reusable component with a set of related functionalities that service requesters can programmatically access from the service provider and manipulate through the Web. One of the main security issues is securing web services against malicious requesters. Since trust plays an important role in many kinds of human communication, allowing people to act under uncertainty and with the risk of negative consequences, many researchers have proposed different trust-based web service access control models to keep out malicious requesters. In this literature review, various existing trust-based web service access control models are studied, and we investigate how the concept of a trust level is used in the access control policy of a service provider to decide whether a service requester may access the web services.
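The surveyed models compute trust levels in different ways; the common policy pattern they share can be sketched minimally as follows, with the trust function (fraction of successful past interactions) being one simple assumed choice:

```python
# Minimal sketch of a trust-level check in a provider's access policy
# (illustrative; the reviewed models each define trust differently).

def trust_level(history):
    """Derive a requester's trust level from past interaction outcomes:
    here, simply the fraction of successful (non-malicious) interactions,
    recorded as 1 (good) or 0 (bad)."""
    if not history:
        return 0.0                 # unknown requesters start untrusted
    return sum(history) / len(history)

def can_invoke(history, required_level):
    """Provider policy: grant the operation only at sufficient trust."""
    return trust_level(history) >= required_level
```

The policy side then reduces to attaching a `required_level` to each operation, e.g. a higher level for sensitive services than for public ones.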
A Literature Review on Trust Management in Web Services Access Controlijwscjournal
This document discusses trust-based access control models for web services. It provides an overview of web services and security issues, then reviews existing access control models including role-based access control and attribute-based access control. It also discusses concepts of trust management and how trust is used in various trust-based web services access control models to determine whether to grant access to requesters based on their trust level. Finally, it examines how trust levels are calculated and how policies are represented in these trust-based models.
This document summarizes research on techniques for improving database performance under heavy query loads. It discusses using fuzzy logic to classify queries by priority and batch similar queries to execute together. The document reviews related work on detecting misuse and insider threats, collaborative querying, risk-adaptive access control models, query recommendation, and ranking query results. The proposed approach uses fuzzy logic to identify high priority queries in a priority queue based on extracted features. Queries are clustered and batched by type to commit together, aiming to yield the best database performance even under high loads.
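The two ideas in this summary, a fuzzy priority score from extracted query features and batching queries of the same type, can be sketched as below. The membership functions, feature names, and the max-combination are assumptions, not the paper's definitions:

```python
# Illustrative sketch: (1) a fuzzy priority score combining two assumed
# feature memberships with fuzzy OR (max); (2) grouping queries by type so
# similar queries can be batched and committed together.

def fuzzy_priority(exec_cost, user_rank):
    """Both inputs are normalized to [0, 1]; higher result = higher priority."""
    cheap = max(0.0, 1.0 - exec_cost)   # membership in 'cheap query'
    important = user_rank               # membership in 'important user'
    return max(cheap, important)        # fuzzy OR of the two memberships

def batch_by_type(queries):
    """Group queries by type; each batch is then executed together."""
    batches = {}
    for q in queries:
        batches.setdefault(q["type"], []).append(q)
    return batches
```

Under load, the scheduler would then pop the highest `fuzzy_priority` batch first, amortizing per-commit cost across the similar queries in it.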
A Privacy-Aware Tracking and Tracing SystemIJCNCJournal
This document summarizes a research paper that proposes a privacy-aware tracking and tracing system. The system uses attribute-based encryption and zero-knowledge proofs to allow users to control access to their identity and location data stored on a cloud server. Specifically, zero-knowledge proofs provide anonymity during authentication to resist tracking attacks, while homomorphic encryption allows the server to perform distance comparisons on encrypted location data without decrypting it. The paper presents the system model and threat model, provides background on the cryptographic techniques used, and describes the proposed scheme in detail. Formal security analysis and verification show that the scheme is secure against chosen plaintext attacks and provides efficient computation for practical applications.
A Privacy-Aware Tracking and Tracing SystemIJCNCJournal
The ability to track and trace assets in the supply chain is becoming increasingly important. In addition to asset tracking, the technologies used provide new opportunities for collecting and analyzing employee position and biometric data. As a result, these technologies can be used to monitor performance or track worker behavior, resulting in additional risks and stress for employees. Furthermore, contact tracing systems used to contain the COVID-19 outbreak have made positive patients' private information public, resulting in violations of users' rights and even endangering their lives. To resolve this situation, a verifiable attribute-based encryption (ABE) scheme based on homomorphic encryption and zero-knowledge identification (ZKI) is proposed, with ZKI providing anonymity for data owners to resist tracking attacks and homomorphic encryption used to solve the problem of privacy leakage from location inquiries returned by a semi-honest server. Finally, theoretical security analysis and formal security verification show that our scheme is secure against the chosen plaintext attack (CPA) and other attacks. In addition, our scheme is efficient in terms of user-side computation overhead, making it practical for real applications.
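The paper's construction is not reproduced here; as background on the zero-knowledge-identification building block it relies on, one round of textbook Schnorr identification can be sketched with toy parameters (a real deployment would use standardized groups and much larger moduli, and this is not the paper's scheme):

```python
# Toy Schnorr identification round: the prover shows knowledge of the
# secret x behind y = g^x mod p without revealing x.
import secrets

P = 23   # small safe prime, p = 2q + 1 with q = 11 (toy parameters only)
Q = 11   # order of the subgroup generated by G
G = 2    # 2 has order 11 modulo 23

def keygen():
    x = secrets.randbelow(Q - 1) + 1     # secret key in [1, Q-1]
    return x, pow(G, x, P)               # (secret x, public y = g^x mod p)

def prove_round(x, y):
    """One interactive round: commit, challenge, respond, verify."""
    r = secrets.randbelow(Q - 1) + 1
    t = pow(G, r, P)                     # prover's commitment g^r
    c = secrets.randbelow(Q)             # verifier's random challenge
    s = (r + c * x) % Q                  # prover's response
    # Verifier checks g^s == t * y^c (mod p) without learning x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

The verifier sees only (t, c, s), which reveals nothing about x; repeating rounds drives a cheating prover's success probability toward zero, which is the anonymity-preserving authentication property the abstract attributes to ZKI.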
Intrusion Detection and Marking Transactions in a Cloud of Databases Environm...neirew J
Cloud computing is a paradigm for large-scale distributed computing that incorporates several existing technologies. A database management system is a collection of programs that enables you to store, modify, and extract information from a database. Databases have now moved to cloud computing, but this move introduces a set of threats that target a cloud of databases. The unification of transaction-based applications in these environments also presents a set of vulnerabilities and threats targeting a cloud of databases. In this context, we propose an intrusion detection and transaction marking approach for a cloud of databases environment.
A Modified Things Role Based Access Control Model For Securing Utilities In C...James Heller
This document proposes a modified role-based access control (RBAC) model called Things RBAC (T-RBAC) to improve security in cloud computing environments. T-RBAC extends traditional RBAC to also consider "things" or networked objects in an internet of things. In T-RBAC, object owners are alerted before permissions are granted to access or modify an object. The document reviews challenges of cloud computing, common security issues, and existing access control models like discretionary access control and mandatory access control. It then discusses RBAC in more detail and proposes the modified T-RBAC model to address cloud computing security needs.
With the recent increase in demand for cloud services, handling clients' needs is getting increasingly challenging. Responding to all requesting clients without exception could lead to security breaches, and since it is the provider's responsibility to secure not only the offered cloud services but also the data, it is important to ensure clients' reliability. Although filtering clients in the cloud is not yet common, it is required to assure cloud safety.
In this paper, we use multi-agent system (MAS) interaction mechanisms to handle cloud interactions on the provider's side, and Particle Swarm Optimization (PSO) to determine an accurate trust weight and thereby filter out malicious and untrustworthy clients. The selection depends on previous knowledge and the overall rating of trusted peers. The conducted experiments show that the combination of MAS, PSO, and trust produces relevant results for different numbers of peers, and the solution is able to converge to the best result after a small number of iterations. To explore more scenarios, synthetic data was generated to improve the model's precision.
The document proposes a model that uses multi-agent systems and particle swarm optimization to select trustworthy cloud clients. Agents represent cloud providers and clients and interact to determine clients' trustworthiness based on previous feedback. The model converges to identify malicious clients after a small number of iterations. Experiments showed the combination of these techniques produced relevant results for different numbers of peers.
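The role PSO plays here, searching for a trust weight that best separates malicious from trustworthy clients, can be sketched in one dimension as below. The ratings, labels, objective (misclassification count), and PSO constants are all assumptions for illustration, not the paper's setup:

```python
# Illustrative 1-D PSO sketch: tune a trust weight w in [0, 1] so that
# flagging peers with rating < w best matches assumed ground-truth labels.
import random

RATINGS = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]   # assumed peer ratings
MALICIOUS = [0, 0, 1, 1, 0, 1]             # assumed ground-truth labels

def errors(w):
    """Objective: number of peers misclassified by threshold w."""
    return sum((r < w) != bool(m) for r, m in zip(RATINGS, MALICIOUS))

def pso(n_particles=10, iters=30, inertia=0.5, c1=1.5, c2=1.5):
    random.seed(1)                          # deterministic for the sketch
    xs = [random.random() for _ in range(n_particles)]
    vs = [0.0] * n_particles
    best = list(xs)                         # per-particle best positions
    gbest = min(xs, key=errors)             # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (inertia * vs[i] + c1 * r1 * (best[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(1.0, max(0.0, xs[i] + vs[i]))
            if errors(xs[i]) < errors(best[i]):
                best[i] = xs[i]
            if errors(xs[i]) < errors(gbest):
                gbest = xs[i]
    return gbest
```

The fast convergence the abstract reports corresponds to `gbest` stabilizing after few iterations; in the paper the objective comes from agent interactions and peer feedback rather than a fixed labeled list.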
Centralized Data Verification Scheme for Encrypted Cloud Data ServicesEditor IJMTER
A cloud environment supports data sharing between multiple users. Data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are involved to efficiently audit cloud data integrity without retrieving the entire dataset from the cloud server. File and block signatures are used in the integrity verification process.
The “One Ring to Rule Them All” (Oruta) scheme is used for a privacy-preserving public auditing process. In Oruta, homomorphic authenticators are constructed using ring signatures. Ring signatures are used to compute the verification metadata needed to audit the correctness of shared data, while the identity of the signer on each block of shared data is kept private from public verifiers. A homomorphic authenticable ring signature (HARS) scheme is applied to provide identity privacy with blockless verification. A batch auditing mechanism makes it possible to perform multiple auditing tasks simultaneously. Oruta is compatible with random masking to preserve data privacy from public verifiers, and dynamic data management is handled with index hash tables. However, traceability is not supported in the Oruta scheme, the sequence of dynamic data operations is not managed by the system, and the system incurs high computational overhead.
The proposed system is designed to perform public data verification with privacy. Traceability features are provided together with identity privacy: the group manager or data owner can be allowed to reveal the identity of a signer based on the verification metadata. A data version management mechanism is also integrated with the system.
DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT Privacy preserving public auditing fo...IEEEGLOBALSOFTTECHNOLOGIES
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09849539085, 09966235788 or mail us - ieeefinalsemprojects@gmail.com-Visit Our Website: www.finalyearprojects.org
JAVA 2013 IEEE CLOUDCOMPUTING PROJECT Privacy preserving public auditing for ...IEEEGLOBALSOFTTECHNOLOGIES
Automatic Management of Wireless Sensor Networks through Cloud Computingyousef emami
This document presents a framework for integrating wireless sensor networks (WSNs) with cloud computing. The framework uses policy-based network management to automate WSN management tasks. It proposes adding a policy analyzer to a publish/subscribe broker to match sensor data with stored policies and mandate appropriate actions. The framework includes software as a service, virtualization management, a publish/subscribe broker, management services like SLA and change management, fault tolerance, and security measures. The goal is to facilitate and automate WSN management through the use of cloud computing and policy-based network rules.
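The policy-analyzer idea, matching published sensor data against stored policies and mandating actions through the broker, can be sketched as below. The topic names, conditions, and actions are assumptions for illustration, not the framework's actual policy language:

```python
# Illustrative sketch of a policy analyzer attached to a publish/subscribe
# broker: each published sensor reading is matched against stored policies,
# and every matching policy's action is mandated.

class PolicyBroker:
    def __init__(self):
        self.policies = []   # list of (topic, condition, action) tuples
        self.actions = []    # actions mandated so far (for inspection)

    def add_policy(self, topic, condition, action):
        """condition: value -> bool; action: value -> description of action."""
        self.policies.append((topic, condition, action))

    def publish(self, topic, value):
        """Deliver a sensor reading; fire every matching policy's action."""
        for p_topic, condition, action in self.policies:
            if p_topic == topic and condition(value):
                self.actions.append(action(value))
        return self.actions
```

Keeping the policies in the broker rather than in the sensor nodes is what makes the management automatic: WSN behavior changes by editing policies in the cloud, not by reprogramming motes.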
Applications Drive Secure Lightpath Creation Across Heterogeneous DomainsTal Lavian Ph.D.
We realize an open, programmable paradigm for application-driven network control by way of a novel network plane — the “service plane” — layered above legacy networks. The service plane bridges domains, establishes trust, and exposes control to credited users/applications while preventing unauthorized access and resource theft. The Authentication, Authorization, and Accounting subsystem and the Dynamic Resource Allocation Controller are the two defining building blocks of our service plane. In concert, they act upon an interconnection request or a restoration request according to application requirements, security credentials, and domain-resident policy. We have experimented with such a service plane in an optical, large-scale testbed featuring two hubs (NetherLight in Amsterdam, StarLight in Chicago) and attached network clouds, each representing an independent domain. The dynamic interconnection of the heterogeneous domains occurred at Layer 1, and ultimately resulted in an optical end-to-end path (lightpath) for use by the requesting Grid application.
Insuring Security for Outsourced Data Stored in Cloud EnvironmentEditor IJCATR
Cloud storage offers users infrastructure flexibility, faster deployment of applications and data, cost control, adaptation of cloud resources to real needs, improved productivity, etc. In spite of these advantages, several deterrents to the widespread adoption of cloud computing remain. Among them, security regarding the correctness of the outsourced data and issues of privacy play a major role. In order to avoid security risks for the outsourced data, we propose dynamic audit services that enable integrity verification of untrusted, outsourced storage. An interactive proof system (IPS) with the zero-knowledge property is introduced to provide public auditability without downloading raw data and to protect the privacy of the data. In the proposed system, the data owner stores a large amount of data in the cloud after encrypting it with a private key, and sends the public key to a third-party auditor (TPA) for auditing purposes; the TPA resides in the cloud and is maintained by the CSP. An Authorized Application (AA), which holds the data owner's secret key (sk), manipulates the outsourced data and updates the associated index hash table (IHT) stored with the TPA. Finally, cloud users access the services through the AA. Our system also provides secure auditing while the data owner outsources data to the cloud. After the auditing operations are performed, the security solutions are enhanced to detect malicious users with the help of a Certificate Authority.
“CONTEXT, CONTENT, PROCESS” APPROACH TO ALIGN INFORMATION SECURITY INVESTMENT...ClaraZara1
Today's business environment is highly dependent on complex technologies, and information is considered an important asset. Organizations are therefore required to protect their information infrastructure and follow an inclusive risk management approach. One way to achieve this is by aligning information security investment decisions with organizational strategy. A large number of information security investment models have been proposed in the literature. These models are useful for optimal and cost-effective investments in information security. However, it is extremely challenging for a decision maker to select one model, or a combination of several, to decide on investments in information security controls. We propose a framework to simplify the task of selecting information security investment model(s). The proposed framework follows the “Context, Content, Process” approach, which is useful for evaluating and prioritizing investments in information security controls in alignment with the overall organizational strategy.
International Journal of Security, Privacy and Trust Management (IJSPTM)ClaraZara1
With the simplicity of transmitting data over the web increasing, there is a more prominent need for adequate security mechanisms. Trust management is essential to the security framework of any network. In most traditional networks, both wired and wireless, centralized entities play pivotal roles in trust management. The International Journal of Security, Privacy and Trust Management (IJSPTM) is an open access peer-reviewed journal that provides a platform for exchanging ideas on new emerging trends that need more focus and exposure, and will attempt to publish proposals that strengthen our goals.
More related content
Similar to Using Lattice To Dynamically Prevent Information Leakage For Web Services
Blockchain-Based Secure and Scalable Routing Mechanisms for VANETs ApplicationsIJCNCJournal
The VANET has seen a boom in the distribution of significant source data, enabling connected-vehicle communications to enhance roadway safety. Despite the potential for interesting applications in vehicle networks, there are still unresolved issues that could hinder bandwidth utilization once deployed. Specifically, insider attacks on VANET platforms, such as blackhole attempts, can completely stop vehicle-to-vehicle communications and impair the network's performance. In this study, we provide a blockchain-based decentralized trust scoring architecture for the participants in the network to identify existing and blacklisted insider adversaries in a VANET. To address this concern, we suggest a two-level detection technique: at the first level, neighboring nodes determine their trustworthiness, and at the second level, trust scores for vehicle nodes are aggregated using a consortium blockchain-based mechanism that uses authorized Road Side Units (RSUs) for consensus. The blacklisted node records are then periodically updated based on the trust scores supplied by the nearby nodes. In regard to the practical scope of the network, the experimental study demonstrates that the suggested solution is effective and sustainable. To improve packet delivery ratio and vehicle node security in the VANET, a blockchain-based Trust-LEACH routing technique has also been created. The performance analysis covers computational cost, computational time for block creation, network analysis, security analysis, and MITM attack analysis. Additionally, we provide evidence that the suggested approach enhances VANET reliability by thwarting insider threats and recording their initiation nodes in its blacklist.
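The two-level detection the abstract describes (neighbors report local trust scores, then an RSU consortium aggregates them and refreshes a blacklist) can be sketched roughly as below. The vehicle names, threshold, and simple averaging are illustrative assumptions, not details from the paper:

```python
# Sketch of two-level trust scoring for VANET nodes (illustrative only).
# Level 1: each neighbor reports a local trust score for a vehicle.
# Level 2: the RSU consortium aggregates the reports and blacklists
# vehicles whose aggregate trust falls below a threshold.

BLACKLIST_THRESHOLD = 0.4  # assumed cutoff, not from the paper

def aggregate_trust(reports):
    """Average the neighbors' local trust scores for one vehicle."""
    return sum(reports) / len(reports)

def update_blacklist(trust_reports, blacklist):
    """Periodically recompute aggregate scores and refresh the blacklist."""
    for vehicle, reports in trust_reports.items():
        if aggregate_trust(reports) < BLACKLIST_THRESHOLD:
            blacklist.add(vehicle)
        else:
            blacklist.discard(vehicle)
    return blacklist

reports = {
    "v1": [0.9, 0.8, 0.85],   # well-behaved vehicle
    "v2": [0.2, 0.1, 0.3],    # suspected blackhole node
}
blacklist = update_blacklist(reports, set())
print(blacklist)  # {'v2'}
```

In the paper the aggregation step runs through a consortium blockchain with RSUs as validators; here a plain function stands in for that consensus round.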
International Journal on Web Service Computing (IJWSC)ijwscjournal
A Web Service is a reusable component with a set of related functionalities that service requesters can programmatically access from the service provider and manipulate through the Web. One of the main security issues is securing web services from malicious requesters. Since trust plays an important role in many kinds of human communication, allowing people to work under insecurity and with the risk of negative cost, many researchers have proposed different trust-based web services access control models to prevent malicious requesters. In this literature review, various existing trust-based web services access control models have been studied, and we investigate how the concept of a trust level is used in the access control policy of a service provider to decide whether a service requester may access the web services.
A Literature Review on Trust Management in Web Services Access Controlijwscjournal
This document discusses trust-based access control models for web services. It provides an overview of web services and security issues, then reviews existing access control models including role-based access control and attribute-based access control. It also discusses concepts of trust management and how trust is used in various trust-based web services access control models to determine whether to grant access to requesters based on their trust level. Finally, it examines how trust levels are calculated and how policies are represented in these trust-based models.
This document summarizes research on techniques for improving database performance under heavy query loads. It discusses using fuzzy logic to classify queries by priority and batch similar queries to execute together. The document reviews related work on detecting misuse and insider threats, collaborative querying, risk-adaptive access control models, query recommendation, and ranking query results. The proposed approach uses fuzzy logic to identify high priority queries in a priority queue based on extracted features. Queries are clustered and batched by type to commit together, aiming to yield the best database performance even under high loads.
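The approach summarized above (fuzzy-logic priority scoring of queries, then batching similar queries to execute together) might look roughly like the sketch below. The membership functions, feature names, and grouping key are invented for illustration and are not taken from the paper:

```python
# Rough sketch of fuzzy priority scoring plus batching of similar queries.
# Features ("cost", "rank") and membership functions are assumptions.

def fuzzy_priority(exec_cost, user_rank):
    """Blend two fuzzy memberships into a single priority in [0, 1]."""
    cheap = max(0.0, 1.0 - exec_cost / 100.0)   # membership in "cheap query"
    important = min(1.0, user_rank / 10.0)      # membership in "important user"
    return 0.5 * cheap + 0.5 * important

def batch_by_type(queries):
    """Group queries by the table they touch so each batch commits together."""
    batches = {}
    for q in queries:
        batches.setdefault(q["table"], []).append(q)
    return batches

queries = [
    {"sql": "SELECT ... FROM orders", "table": "orders", "cost": 20, "rank": 9},
    {"sql": "SELECT ... FROM users",  "table": "users",  "cost": 80, "rank": 2},
    {"sql": "SELECT ... FROM orders", "table": "orders", "cost": 30, "rank": 5},
]
for q in queries:
    q["priority"] = fuzzy_priority(q["cost"], q["rank"])

# High-priority queries come first; similar ones land in the same batch.
batches = batch_by_type(sorted(queries, key=lambda q: -q["priority"]))
print([(table, len(batch)) for table, batch in batches.items()])
```

A real system would derive the memberships from extracted query features and tune the blend, but the queue-then-batch shape stays the same.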
A Privacy-Aware Tracking and Tracing SystemIJCNCJournal
This document summarizes a research paper that proposes a privacy-aware tracking and tracing system. The system uses attribute-based encryption and zero-knowledge proofs to allow users to control access to their identity and location data stored on a cloud server. Specifically, zero-knowledge proofs provide anonymity during authentication to resist tracking attacks, while homomorphic encryption allows the server to perform distance comparisons on encrypted location data without decrypting it. The paper presents the system model and threat model, provides background on the cryptographic techniques used, and describes the proposed scheme in detail. Formal security analysis and verification show that the scheme is secure against chosen plaintext attacks and provides efficient computation for practical applications.
A Privacy-Aware Tracking and Tracing SystemIJCNCJournal
The ability to track and trace assets in the supply chain is becoming increasingly important. In addition to asset tracking, the technologies used provide new opportunities for collecting and analyzing employee position and biometric data. As a result, these technologies can be used to monitor performance or track worker behavior, resulting in additional risks and stress for employees. Furthermore, contact tracing systems used to contain the COVID-19 outbreak have made positive patients' privacy public, resulting in violations of users' rights and even endangering their lives. To resolve this situation, a verifiable attribute-based encryption (ABE) scheme based on homomorphic encryption and zero-knowledge identification (ZKI) is proposed, with ZKI providing anonymity for data owners to resist tracking attacks and homomorphic encryption used to solve the problem of privacy leakage from location inquiries returned from a semi-honest server. Finally, theoretical security analysis and formal security verification show that our scheme is secure against the chosen plaintext attack (CPA) and other attacks. Besides that, our novel scheme is efficient enough in terms of user-side computation overhead for practical applications.
Intrusion Detection and Marking Transactions in a Cloud of Databases Environm...neirew J
Cloud computing is a paradigm for large-scale distributed computing that includes several existing technologies. A database management system is a collection of programs that enables you to store, modify, and extract information from a database. Databases have now moved to cloud computing, but this move introduces a set of threats that target a cloud of databases system. The unification of transaction-based applications in these environments also presents a set of vulnerabilities and threats that target a cloud of databases environment. In this context, we propose an intrusion detection and transaction marking approach for a cloud of databases environment.
A Modified Things Role Based Access Control Model For Securing Utilities In C...James Heller
This document proposes a modified role-based access control (RBAC) model called Things RBAC (T-RBAC) to improve security in cloud computing environments. T-RBAC extends traditional RBAC to also consider "things" or networked objects in an internet of things. In T-RBAC, object owners are alerted before permissions are granted to access or modify an object. The document reviews challenges of cloud computing, common security issues, and existing access control models like discretionary access control and mandatory access control. It then discusses RBAC in more detail and proposes the modified T-RBAC model to address cloud computing security needs.
With the recent increase in demand for cloud services, handling clients' needs is getting increasingly challenging. Responding to all requesting clients without exception could lead to security breaches, and since it is the provider's responsibility to secure not only the offered cloud services but also the data, it is important to ensure clients' reliability. Although filtering clients in the cloud is not so common, it is required to assure cloud safety.
In this paper, we use multi-agent system interaction mechanisms to handle cloud interactions on the provider's side, and Particle Swarm Optimization to select and determine the accurate trust weight, and therefore to filter clients by identifying malicious and untrustworthy ones. The selection depends on previous knowledge and the overall rating of trusted peers. The conducted experiments prove that the combination of MAS, PSO, and trust produces relevant results with different numbers of peers; the solution is able to converge to the best result after a small number of iterations. To explore more space and scenarios, synthetic data was generated to improve the model's precision.
The document proposes a model that uses multi-agent systems and particle swarm optimization to select trustworthy cloud clients. Agents represent cloud providers and clients and interact to determine clients' trustworthiness based on previous feedback. The model converges to identify malicious clients after a small number of iterations. Experiments showed the combination of these techniques produced relevant results for different numbers of peers.
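A minimal particle-swarm search in the spirit of this abstract could look like the sketch below: particles search for a trust threshold that separates known-trustworthy peers from known-malicious ones. The peer ratings, fitness function, and PSO coefficients are all illustrative assumptions, not the paper's actual formulation:

```python
import random

# Minimal PSO sketch: particles search for a trust threshold that best
# separates known-trustworthy from known-malicious peers. All data and
# PSO parameters here are illustrative assumptions.

random.seed(0)
trusted_ratings = [0.8, 0.9, 0.75, 0.85]
malicious_ratings = [0.2, 0.35, 0.1, 0.3]

def fitness(threshold):
    """Count peers correctly classified by the candidate threshold."""
    hits = sum(r >= threshold for r in trusted_ratings)
    hits += sum(r < threshold for r in malicious_ratings)
    return hits

particles = [random.random() for _ in range(10)]   # candidate thresholds
velocities = [0.0] * 10
pbest = particles[:]                               # each particle's best
gbest = max(pbest, key=fitness)                    # swarm's best so far

for _ in range(30):  # converges after a small number of iterations
    for i, x in enumerate(particles):
        v = (0.5 * velocities[i]
             + 1.5 * random.random() * (pbest[i] - x)
             + 1.5 * random.random() * (gbest - x))
        particles[i] = min(1.0, max(0.0, x + v))
        velocities[i] = v
        if fitness(particles[i]) > fitness(pbest[i]):
            pbest[i] = particles[i]
    gbest = max(pbest, key=fitness)

# gbest is a threshold that classifies every peer in this toy set correctly.
```

In the paper the fitness would come from agents' feedback histories rather than fixed rating lists, but the converge-to-a-trust-weight loop is the same shape.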
Centralized Data Verification Scheme for Encrypted Cloud Data ServicesEditor IJMTER
Cloud environments support data sharing between multiple users. Data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are involved to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. File and block signatures are used in the integrity verification process.
The “One Ring to Rule Them All” (Oruta) scheme is used for privacy-preserving public auditing. In Oruta, homomorphic authenticators are constructed using ring signatures. Ring signatures are used to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept private from public verifiers. The homomorphic authenticable ring signature (HARS) scheme is applied to provide identity privacy with blockless verification. A batch auditing mechanism supports performing multiple auditing tasks simultaneously. Oruta is compatible with random masking to preserve data privacy from public verifiers. Dynamic data management is handled with index hash tables. However, traceability is not supported in the Oruta scheme, the data dynamism sequence is not managed by the system, and the system incurs high computational overhead.
The proposed system is designed to perform public data verification with privacy. Traceability features are provided alongside identity privacy: the group manager or data owner can be allowed to reveal the identity of the signer based on the verification metadata. A data version management mechanism is also integrated with the system.
DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT Privacy preserving public auditing fo...IEEEGLOBALSOFTTECHNOLOGIES
JAVA 2013 IEEE CLOUDCOMPUTING PROJECT Privacy preserving public auditing for ...IEEEGLOBALSOFTTECHNOLOGIES
Automatic Management of Wireless Sensor Networks through Cloud Computingyousef emami
This document presents a framework for integrating wireless sensor networks (WSNs) with cloud computing. The framework uses policy-based network management to automate WSN management tasks. It proposes adding a policy analyzer to a publish/subscribe broker to match sensor data with stored policies and mandate appropriate actions. The framework includes software as a service, virtualization management, a publish/subscribe broker, management services like SLA and change management, fault tolerance, and security measures. The goal is to facilitate and automate WSN management through the use of cloud computing and policy-based network rules.
Applications Drive Secure Lightpath Creation Across Heterogeneous DomainsTal Lavian Ph.D.
Similar to Using Lattice To Dynamically Prevent Information Leakage For Web Services (20)
Analysis of Media Discourse on Intellectual Property Rights Related to Metave...ClaraZara1
This study examines the evolving discourse on intellectual property rights (IPR) within the metaverse's Korean context, facilitated by the BIGKinds analytics program. As the metaverse transitions from a nascent concept to a complex reality, it reshapes digital interactions and poses new challenges for IPR, necessitating a comprehensive investigation into societal interest and legal discourse. The findings reveal a significant surge in metaverse-related IPR engagement over the past three years, aligning with the broader digital shift amid the COVID-19 pandemic. This pattern suggests an increasing need for nuanced legal approaches to copyright, trademark, and design patent protection in virtual environments. It also highlights the urgency of legal reform to accommodate the metaverse's unique characteristics, the necessity of international collaboration on IPR in a borderless digital domain, and the intersection of technological advances, like NFTs and blockchain, with the legal frameworks affecting creators' rights. As a result, this study provides policymakers and the digital community with real-time guidance on protecting intellectual property amid the transformative growth of the metaverse.
International Conference on Information Technology Convergence Services & AI ...ClaraZara1
International Conference on Information Technology Convergence Services & AI (ITCAI 2024) will provide an excellent international forum for sharing knowledge and new research results in all areas of Information Technology Convergence Services & AI. The Conference focuses on all technical and practical aspects of ITC & AI. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Information Technology Convergence and services, AI, and establishing new collaborations in these areas.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
10th International Conference on Artificial Intelligence and Soft Computing (...ClaraZara1
10th International Conference on Artificial Intelligence and Soft Computing (AIS 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology, and applications of Artificial Intelligence, Soft Computing. The Conference looks for significant contributions to all major fields of the Artificial Intelligence, Soft Computing in theoretical and practical aspects. The aim of the Conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
A New Framework for Securing Personal Data Using the Multi-CloudClaraZara1
Relying on a single cloud as a storage service is not a proper solution for a number of reasons; for instance, the data could be captured while being uploaded to the cloud, or stolen from the cloud using a stolen ID. In this paper, we propose a solution that aims at offering secure data storage for mobile cloud computing based on a multi-cloud scheme. The proposed solution takes advantage of multiple clouds, data cryptography, and data compression to secure the distributed data: it splits the data into segments, encrypts the segments, compresses them, and distributes them via multiple clouds while keeping one segment in the mobile device's memory, which prevents extracting the data if the distributed segments are intercepted.
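The split-protect-distribute pipeline this abstract outlines can be sketched as below. The XOR step stands in for a real cipher, and the cloud names and segment count are placeholders, not details from the paper:

```python
import zlib

# Sketch of the multi-cloud scheme: split data into segments, compress
# and "encrypt" each one, keep one segment on the device, and spread the
# rest across clouds. XOR with a single key byte stands in for a real
# cipher; "cloudA"/"cloudB" are placeholder destinations.

KEY = 0x5A  # toy key, for illustration only

def protect(segment: bytes) -> bytes:
    compressed = zlib.compress(segment)
    return bytes(b ^ KEY for b in compressed)

def recover(blob: bytes) -> bytes:
    return zlib.decompress(bytes(b ^ KEY for b in blob))

def distribute(data: bytes, n_segments: int, clouds):
    size = -(-len(data) // n_segments)  # ceiling division
    segments = [data[i:i + size] for i in range(0, len(data), size)]
    local = protect(segments[0])        # one segment stays on the device
    remote = [(clouds[i % len(clouds)], protect(s))
              for i, s in enumerate(segments[1:])]
    return local, remote

data = b"confidential personal data" * 4
local, remote = distribute(data, 4, ["cloudA", "cloudB"])

# Rebuilding requires the local segment plus every remote segment:
restored = recover(local) + b"".join(recover(blob) for _, blob in remote)
assert restored == data
```

Because one protected segment never leaves the device, an attacker who intercepts every cloud-held segment still cannot reconstruct the file, which is the core claim of the scheme.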
3rd International Conference on Artificial Intelligence Advances (AIAD 2024)ClaraZara1
3rd International Conference on Artificial Intelligence Advances (AIAD 2024) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the area advanced Artificial Intelligence. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the research area. Core areas of AI and advanced multi-disciplinary and its applications will be covered during the conferences.
10th International Conference on Software Engineering (SOFT 2024)ClaraZara1
10th International Conference on Software Engineering (SOFT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Software Engineering and Applications. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts and establishing new collaborations in these areas.
8th International Conference on Networks and Security (NSEC 2024)ClaraZara1
8th International Conference on Networks and Security (NSEC 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of networks and communications security. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
10th International Conference on Artificial Intelligence and Soft Computing (...ClaraZara1
10th International Conference on Artificial Intelligence and Soft Computing (AIS 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology, and applications of Artificial Intelligence, Soft Computing. The Conference looks for significant contributions to all major fields of the Artificial Intelligence, Soft Computing in theoretical and practical aspects. The aim of the Conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
DOCUMENT SELECTION USING MAPREDUCE Yenumula B Reddy and Desmond HillClaraZara1
Big data is used for structured, unstructured and semi-structured large volume of data which is difficult to manage and costly to store. Using explanatory analysis techniques to understand such raw data, carefully balance the benefits in terms of storage and retrieval techniques is an essential part of the Big Data. The research discusses the MapReduce issues, framework for MapReduce programming model and implementation. The paper includes the analysis of Big Data using MapReduce techniques and identifying a required document from a stream of documents. Identifying a required document is part of the security in a stream of documents in the cyber world. The document may be significant in business, medical, social, or terrorism.
10th International Conference on Data Mining (DaMi 2024)ClaraZara1
10th International Conference on Data Mining (DaMi 2024) Conference provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum.
12th International Conference of Artificial Intelligence and Fuzzy Logic (AI ...ClaraZara1
12th International Conference of Artificial Intelligence and Fuzzy Logic (AI & FL 2024) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to these topics only.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
IEEE Aerospace and Electronic Systems Society as a Graduate Student Member
Using Lattice To Dynamically Prevent Information Leakage For Web Services
International Journal of Security, Privacy and Trust Management (IJSPTM) Vol 5, No 1, February 2016
DOI: 10.5121/ijsptm.2016.5102
USING LATTICE TO DYNAMICALLY PREVENT
INFORMATION LEAKAGE FOR WEB SERVICES
Shih-Chien Chou
Department of Computer Science and Information Engineering, National Dong Hwa
University, Taiwan
ABSTRACT
The use of web services has become increasingly important. Since web services may be provided by
unknown providers, many researchers have designed mechanisms to ensure secure access to web
services. Existing mechanisms generally decide statically whether a requester can invoke a web
service but ignore the importance of dynamically preventing information leakage during web service
execution. This paper proposes a lattice-based information flow control model, WSIFC (web service
information flow control), to dynamically prevent information leakage. It uses simple rules to
monitor the flows of sensitive information during web service execution.
KEYWORDS
Web service, information flow control, information leakage prevention
1. INTRODUCTION
Many researchers have designed mechanisms to ensure secure access to web services. Existing
research generally decides statically whether a requester can invoke a web service, using
mechanisms such as attributes and credentials. In our opinion, this static operation is
insufficient: it is also necessary to ensure web service security dynamically. Dynamically ensuring
web service security prevents information leakage during web service execution. This prevention is
important because a web service may be dishonest or malicious and may leak sensitive information
received from requesters. Even an honest and friendly web service may incidentally leak
information during execution. For example, suppose a requester sends his credit card
information to an honest web service for a payment. If the web service incidentally displays the
credit card information on the screen and the information is captured by a malicious person,
information leakage occurs.
According to the above description, dynamically preventing information leakage during web
service execution is crucial. The prevention can be achieved by an information flow control
model [1-2], which ensures that every information flow is secure during software execution. An
information flow may occur in the following situations: (a) assigning an expression result to a
variable, (b) reading an input device, (c) writing an output device, and (d) sending information to
another site. Our previous work proposed an information flow control model for web services [3],
which is composed of a trust management mechanism and an information flow control model. The
former statically checks whether a requester can invoke a web service, and the latter dynamically
prevents information leakage during web service execution. Although the dynamic information
flow control model prevents all information leakage, its runtime overhead is large. To reduce the
overhead, we use a far simpler approach to design a new information flow control model, WSIFC
(web service information flow control), for web services. WSIFC is a purely dynamic model. It
reuses the trust management mechanism of our previous work to statically decide whether a
requester can invoke a web service.
2. RELATED WORK
XACML [4] is a standard offering mechanisms to describe access policies for web services. The
model proposed in [5] is an implementation of XACML. It allows requesters to use PMI
(privilege management infrastructure) for managing the checking, retrieval, and revocation of
authentication. The model also uses an RBAC-based web service access control policy to
determine whether a requester can invoke a web service. The model proposed in [6] uses
X-GTRBAC [7] to control access to web services. X-GTRBAC can be used in heterogeneous
and distributed sites. Moreover, it applies TRBAC [8] to control time-related factors. The
model in [9] determines whether a composition of web services can be invoked. It offers a
language to describe policies, which check whether a composition of web services can be
invoked. The model in [10] uses RBAC (role-based access control) concept [11] to define policies
of accessing a web service. It is a two-leveled mechanism. The first level checks the roles
assigned to requesters and web services. An authorized requester is a candidate to invoke a web
service. The second level uses parameters as service attributes and assigns permissions to the
attributes. An authorized requester can invoke a web service only when it possesses the
permissions to access the attributes.
In addition to the research discussed above, quite a few studies focus on negotiation between
requesters and web services. In general, negotiation can be regarded as a form of access control.
We survey
some of them. The model Trust-Serv [12] uses state machines to dynamically choose web
services at run time. The choosing is a kind of negotiation. It uses trust negotiation [13] to select
web services that can be invoked. The kernel component to determine whether a web service can
be invoked is credential. The model in [14] uses digital credentials for negotiation. It defines
strategies for negotiation policies. The model in [15] handles k-leveled invocation of web
services. The primary component used in the model is credentials. In the survey of [16], the
authors especially emphasize the importance of data privacy in a cloud computing environment
(the cloud computing model can also be applied to web services). They believe that
information flow control is a good solution to the problem. However, the article does not propose
a concrete information flow control model for cloud environments.
According to our survey, existing models generally determine statically whether a requester can
invoke a web service; none of them dynamically prevents information leakage during web service
execution.
3. THE MODEL
Our previous work uses two time-consuming rules and a complicated join operator to control
every variable access [3], which results in large runtime overhead. In our opinion, sending
sensitive information to a web service may be unavoidable. However, for security reasons,
requesters may avoid sending much sensitive information to web services. For web services
whose providers are not trusted by a requester, the requester may even send no sensitive
information at all. To reduce runtime overhead, our model WSIFC controls only the access of
sensitive variables.
WSIFC is based on a lattice. A lattice is a partially ordered graph with a min-node and a
max-node. When a lattice is used to control information flows, the min-node possesses the fewest
privileges while the max-node possesses the most. The traditional lattice-based model uses
the “no read up” and “no write down” rules to control information flows (note that the “can flow”
relationships in [1] inherit the spirit of the two rules). If the security levels of sensitive
information are structured using the lattice model and the two rules are obeyed, no information
leakage will occur. However, the traditional lattice-based information flow control model is a
form of MAC (mandatory access control), which is criticized as too restrictive. To loosen the restriction,
WSIFC only places sensitive variables in a lattice and uses simple rules to replace the restrictive
“no read up” and “no write down” rules.
Before formally defining WSIFC, we first describe information leakage in more detail.
Generally, information leakage occurs when malicious persons or web services illegally obtain
others’ sensitive information. Persons can obtain information from screens or files, and
web services can obtain information from files or from other web services (executing either on the
same site or on other ones). Information leakage may thus happen when: (a) sensitive
information is output to screens or files, or (b) sensitive information is transferred to other web
services. In our research, we assume that information transfer over the network is secure and thus
skip the problem of cryptography. We also skip the effects of viruses and worms because they are
outside the scope of our research. Below we turn our focus back to WSIFC.
WSIFC uses security levels to decide whether an instruction may leak sensitive information
during the execution of a web service. It attaches a security level to every sensitive variable.
Since a lattice is partially ordered and WSIFC is lattice-based, security levels should also be
partially ordered. To achieve this, a security level in WSIFC is defined as (Gp, security level
number), in which Gp is a group number and the security level number decides the sensitivity of
the variable. To compare or exchange information associated with security levels, the information
should be within the same group, which achieves partial ordering. Partial ordering is necessary
because some variables are incomparable. For example, a variable storing an American’s salary
in USD cannot be compared with one storing a Taiwanese’s salary in NTD.
According to the above description, the group number in a security level achieves partial ordering
and the security level number decides the sensitivity of a variable. A security level number is
between 0 and n, in which 0 denotes the least sensitive information and n the most sensitive. The
number n is decided by the user; a larger n results in finer-grained control. In addition to a
security level, a sensitive variable is also associated with a tag, which is a set. Each element of
the set is an IP address plus port number pair (port numbers represent web services at the IP
address). The tag decides whether sending the information of a sensitive variable to another web
service is secure: if the IP address plus port number pair of a web service is within a sensitive
variable’s tag, the information of the variable can be sent to that web service.
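The security level and tag structure described above can be sketched as a small data type. The following is a minimal illustration in Python; the class and field names are our own, not part of WSIFC’s definition:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityLevel:
    group: int   # Gp: levels are comparable only within the same group
    number: int  # 0 = least sensitive, n = most sensitive

    def comparable_with(self, other: "SecurityLevel") -> bool:
        # Partial ordering: levels in different groups are incomparable,
        # e.g. a salary in USD versus a salary in NTD.
        return self.group == other.group

@dataclass
class SensitiveVariable:
    name: str
    level: SecurityLevel
    # Tag: set of (IP address, port) pairs naming the web services that
    # may receive this variable's information.
    tag: set = field(default_factory=set)

usd_salary = SensitiveVariable("usd_salary", SecurityLevel(0, 3))
ntd_salary = SensitiveVariable("ntd_salary", SecurityLevel(1, 3))
assert not usd_salary.level.comparable_with(ntd_salary.level)
```

Keeping the group inside the level, rather than in a separate table, makes the Rule 1 same-group check a simple field comparison.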
By now we have discussed sensitive variables. Non-sensitive variables possess neither security
levels nor tags; that is, they are not placed in the lattice. Nevertheless, the security level and tag
of a variable may be adjusted during web service execution, which implies that a non-sensitive
variable may become sensitive, and vice versa, at runtime. For example, after executing the
statement “a=b;”, the security level and tag of variable a are adjusted to be the same as those of b.
If b is sensitive, a becomes sensitive.
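A minimal sketch of this relabeling, assuming the monitor intercepts each assignment; the label table and the `propagate` helper are hypothetical illustrations, not WSIFC’s actual implementation:

```python
# Map variable name -> (security level, tag); non-sensitive variables
# are simply absent from the table, i.e. not placed in the lattice.
labels = {
    "b": ((1, 2), frozenset({("10.0.0.5", 8080)})),
}

def propagate(dst: str, src: str) -> None:
    """After executing 'dst = src', dst takes src's level and tag."""
    if src in labels:
        labels[dst] = labels[src]   # dst becomes sensitive
    else:
        labels.pop(dst, None)       # dst becomes non-sensitive

propagate("a", "b")                 # a = b;  a is now labeled like b
assert labels["a"] == labels["b"]
```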
As mentioned before, sensitive information managed by a web service may be leaked only when:
(a) it is output to files or screens, or (b) it is transferred to other web services. The latter case
of leakage can be prevented by the tags associated with sensitive variables. To prevent the former,
every file and screen that may receive output information should also be associated with a
security level. Generally, the security level of a screen is decided by its location. The only way to
change a screen’s security level is to change its location. However, changing a screen’s location is
known only to persons; a web service will never know of it. Therefore, the security levels of
screens remain unchanged during web service execution. As to files, their security levels also
cannot be adjusted, because the adjustment would affect the access of the existing information
within the files. For example, if the security level number of a file is 3, it can be accessed by a
user possessing the privilege to access files whose security level number is 3 or lower. If the
security level of the file is adjusted to 4, the user mentioned above can no longer
access the file. Adjusting the security levels of files is thus infeasible. In addition to writing a file,
reading a file may also cause information leakage. For example, if a variable with a low security
level reads the contents of a highly sensitive file, rules should be defined to prevent information
leakage. We give the information leakage prevention rules of WSIFC after defining the model.
The kernel concepts of WSIFC have been described in detail above. Below we give a definition
of WSIFC. To prevent information leakage during the execution of a web service, WSIFC should
be embedded in the web service.
Definition
WSIFC = (SV, SC, SF, SVSL, SCSL, SFSL, SVTAG, MIN, MAX), in which
a. SV is the set of sensitive variables. Initially, every variable in a web service is non-sensitive.
When the web service is invoked and receives sensitive information sent from a requester, the
parameters receiving sensitive information become sensitive. During web service execution,
sensitive parameters may be assigned to other variables, which results in more sensitive
variables that should be monitored.
b. SC is the set of sensitive screens. The security levels of screens are decided by their locations
and will be unchanged during web service execution.
c. SF is the set of sensitive files. The security levels of sensitive files cannot be changed.
d. SVSL is the set of security levels for sensitive variables. A security level is composed of a
group number and a security level number.
e. SCSL is the set of security levels for sensitive screens. The security level of a sensitive screen
cannot be changed during the execution of a web service.
f. SFSL is the set of security levels for sensitive files. The security level of a sensitive file
cannot be changed during the execution of a web service.
g. SVTAG is a set of tags. If the information of a sensitive variable can be transferred to other
web services, the variable should be associated with a tag containing a set of IP address plus
port number pairs.
h. MIN is the min-node of WSIFC.
i. MAX is the max-node of WSIFC.
For simplicity, we do not include other I/O devices such as keyboards and mice in the definition
above. WSIFC uses the same approaches to control information flows to/from I/O devices
other than files and screens. If a device can be read, its control is the same as reading a file. If
information can be output to a device and the information can only be accessed by persons, the
control is the same as writing information to a screen. If information can be output to a device and
the information can be accessed by both persons and web services, the control is the same as
writing a file.
[Figure 1, omitted here, depicts a lattice whose nodes are security levels (Gp, security level
number) for groups 0 through 2 and level numbers 0 through 15, ranging from (0,0), (1,0), (2,0)
at the bottom to (0,15), (1,15), (2,15) at the top.]
Figure 1. An example lattice of WSIFC
In the lattice-based WSIFC, a node may contain one or more security levels. Moreover, since
variables, files, screens, and other I/O devices may be in the same security level, a security level
in WSIFC may be associated with multiple sensitive items such as variables and files. Figure 1
depicts an example lattice of WSIFC, which only shows the security levels but not the sensitive
items associated with the security levels. In the following paragraphs we will define the rules to
control information flows using WSIFC.
Rule 1. Sensitive variables appearing in a derivation statement should be in the same group. Here
a derivation statement is one that causes information flows, such as an assignment statement.
Non-sensitive variables and constants can appear in any statement and do not affect the execution
of the statement. Rule 1 is an umbrella rule constraining the others; that is, if Rule 1 is violated,
no derivation statement can be executed.
Rule 2. If the variable d_var is derived from the variables in the sensitive variable set “{var_i |
var_i is a sensitive variable and i is between 1 and n}” and other non-sensitive variables, WSIFC
allows d_var to receive the derived information in any case. After the assignment, the security
level number and tag of d_var are respectively changed to MAX(SLV(var_1), ..., SLV(var_n))
and tag_1 ∩ tag_2 ∩ ... ∩ tag_n, in which:
a. The function SLV extracts a sensitive variable’s security level number from its security
level.
b. The function MAX extracts the maximum number from a set of numbers.
c. tag_i is the tag of the sensitive variable var_i.
Rule 2 may confuse many readers because it completely violates the “no read up” and “no write
down” rules of the traditional lattice-based model. In addition, it controls neither read nor write
access. We explain Rule 2 as follows. Before that, we make a crucial assumption: every piece of
data assigned to a variable is reliable.
Existing information flow control models generally do not allow information with a high security
level to be assigned to a variable with a low security level. In contrast, WSIFC allows every
assignment. To prevent leakage when sensitive information is output, WSIFC changes the
security level and tag of a variable receiving a value as mentioned above. Suppose variable d_var
is assigned a value derived from a sensitive variable set {var_i}. Then, the security level number
of d_var will be changed to MAX(SLV(var_1), ..., SLV(var_n)). According to the change, the
security level of d_var will be upgraded if the initial SLV(d_var) is smaller than
MAX(SLV(var_1), ..., SLV(var_n)). The upgrading prevents the information received by d_var
from being leaked. On the other hand, if the initial SLV(d_var) is larger than
MAX(SLV(var_1), ..., SLV(var_n)), the security level of d_var will be downgraded. Even under
this situation, the information obtained by d_var will not be leaked. According to the crucial
assumption mentioned above, that every piece of data assigned to a variable is reliable, both read
and write access are secure under Rule 2.
Below we use the statement “a=b+c+d;” to explain Rule 2 (we bypass the discussion of tags here,
deferring them to Rule 6). Suppose the initial security levels of the variables a, b, c, and d are
respectively (1,1), (1,2), (1,3), and (1,4). In this case, the security level of the variable a is the
smallest, and existing information flow control models would ban the assignment. However,
WSIFC allows the statement to execute. After the execution, WSIFC changes the security level of
variable a to (1,4). According to the change, the security level of variable a has been upgraded
and therefore no information leakage will occur. On the other hand, if the initial security level of
variable a is (1,5), the security level of variable a will be set to (1,4) after the assignment.
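The relabeling in this example can be sketched as follows; the `rule2_relabel` helper is our own illustrative name, and we assume all operands are already in the same group as Rule 1 requires:

```python
def rule2_relabel(operands):
    """Return the (level, tag) for a variable derived from 'operands',
    where each operand is a ((group, number), tag) pair."""
    groups = {lvl[0] for lvl, _ in operands}
    assert len(groups) == 1, "Rule 1: operands must share one group"
    group = groups.pop()
    # Security level number: the maximum over the operands (Rule 2).
    number = max(lvl[1] for lvl, _ in operands)
    # Tag: the intersection of the operands' tags (Rule 2).
    tags = [tag for _, tag in operands]
    tag = frozenset.intersection(*tags)
    return (group, number), tag

# a = b + c + d with levels (1,2), (1,3), (1,4):
b = ((1, 2), frozenset({("10.0.0.5", 80), ("10.0.0.6", 80)}))
c = ((1, 3), frozenset({("10.0.0.5", 80)}))
d = ((1, 4), frozenset({("10.0.0.5", 80)}))
level, tag = rule2_relabel([b, c, d])
assert level == (1, 4)                       # upgraded to the maximum
assert tag == frozenset({("10.0.0.5", 80)})  # only the commonly trusted service
```

Intersecting the tags ensures that the derived value can be sent only to web services trusted by every contributing variable.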
Rule 2 becomes non-secure only when the above crucial assumption is not fulfilled. In that case,
low security level variables may maliciously corrupt the information carried by high security
level variables (note that even in this case, no information leakage will occur). We cannot
estimate the probability of such corruption. However, since the assumption is a compromise made
to reduce runtime overhead, taking this risk is unavoidable. We think that the compromise of
bypassing write checks in Rule 2 is acceptable because it may corrupt, but not leak, information.
Since the primary objective of WSIFC is preventing information leakage, a compromise that does
not hurt the primary objective is not that serious.
Rule 3. To output the information of a sensitive variable SVAR to a screen SCRN, SLV(SCRN)
should be at least as large as SLV(SVAR). Recall that the function SLV extracts a sensitive
variable’s security level number. This rule prevents highly sensitive information from being
captured by persons who may only read low-sensitivity information from a screen.
Rule 4. If a variable VAR intends to read information from a file FLE, the read operation is
allowed. After the read, the security level and tag of VAR should be set the same as those of
the information read from FLE. The spirit of this rule is the same as that of Rule 2, because
reading a value from a file corresponds to deriving a value from the file and then assigning the
value to the variable.
Rule 5. If a sensitive variable VAR intends to write its information to a file FLE, SLV(FLE)
should be at least as large as SLV(VAR). Reading and writing a file are totally different. When
reading information from a file into a variable, the information being read can be protected as
long as the security level and tag of the variable are set the same as those of the information. On
the other hand, since the security level of a file cannot be changed as mentioned before, writing
information with a high security level to a low security level file may result in information
leakage.
Rule 6. To send the information of a sensitive variable to another web service, the service’s IP
address plus port number pair should be within the tag of the variable. In the sending operation,
the information and the security level of the variable should be sent together so that the web
service receiving the information can protect the information using WSIFC.
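Rules 3, 5, and 6 are all guards on output operations. A condensed sketch, with function names of our own choosing rather than WSIFC’s:

```python
def can_output_to_screen(var_level: int, screen_level: int) -> bool:
    # Rule 3: SLV(screen) must be at least SLV(variable).
    return screen_level >= var_level

def can_write_file(var_level: int, file_level: int) -> bool:
    # Rule 5: SLV(file) must be at least SLV(variable); file levels
    # are fixed, so a low-level file must never receive
    # high-level information.
    return file_level >= var_level

def can_send(tag: set, service: tuple) -> bool:
    # Rule 6: the target service's (IP, port) pair must be in the
    # variable's tag; the level and tag travel with the data so the
    # receiver can keep enforcing WSIFC.
    return service in tag

assert can_output_to_screen(3, 3)
assert not can_write_file(4, 3)
assert can_send({("10.0.0.5", 8080)}, ("10.0.0.5", 8080))
```

Note that reads (Rule 4) relabel rather than block, so only these three output guards can ever reject an operation at runtime.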
4. PROOF OF CORRECTNESS
Information leakage may occur in the following cases: (a) the information of a variable is
output to a screen, (b) the information of a variable is output to a file, and (c) the information of a
variable is sent to web services at other sites. These cases reveal that information leakage
may occur only when variable contents are output (sending information to other sites can be
considered a special case of output). Below we prove that none of the above cases will leak
information.
Case 1: Outputting the information of a sensitive variable to a screen will not leak information.
To prove this case, we make the following assumptions: (a) SLV(VAR) of the sensitive variable
VAR is m, and (b) VAR is output to a screen SCRN whose SLV(SCRN) is n. If a person intends
to access the screen SCRN, the person should possess the privilege to access screens whose
security level numbers are at least n. Since n ≥ m according to Rule 3, Case 1 is true.
Case 2: Outputting the information of a sensitive variable to a file will not leak information.
Information output to a screen may only be leaked to persons. However, information output to a
file may be leaked to persons or leaked by web services that read the file. Both situations should
be considered for Case 2. Proving that information output to a file will not be leaked to persons is
similar to Case 1, so we do not duplicate the proof. Below we prove that information output to a
file will not be leaked by web services that read the file. Here we consider only web services that
do not transfer information to other sites, because that type of web service is discussed in Case 3
below.
To prove Case 2, we assume that: (a) SLV(VAR) of the sensitive variable VAR is m, (b) VAR is output to a file FLE whose SLV(FLE) is n, and (c) WSIFC is embedded in every web service that will access FLE. The third assumption is crucial because WSIFC cannot control the information flows of a web service that does not embed it. According to Rule 5, n ≥ m. If a web service reads VAR from FLE and assigns its information to the variable VAR1, SLV(VAR1) is set to m according to Rule 4. If VAR1 is output to a screen, the proof of Case 1 ensures that the output information will not be leaked to persons. If VAR1 is output to a file FLE1, the proof returns to its beginning; that is, the proof recurses indefinitely. Nevertheless, the information of a sensitive variable can only be: (a) operated on by a web service, (b) output to a screen, or (c) written to a file. Information operated on by a web service is not leaked because no output occurs. Information output to a screen is not leaked, as described above. As for information output to a file, Rules 3 through 5 ensure that the information will not be leaked to persons. Since no information is transferred to other sites in this case, preventing leakage to persons is equivalent to preventing information leakage. According to the above description, Case 2 is true.
Case 3: Sending the information of a sensitive variable to web services on other sites will not leak information. Suppose the web service WEBSER executing on another site receives the information of a sensitive variable VAR. To ensure that WEBSER will not leak the information it receives, WSIFC must be embedded in WEBSER. As Rule 6 requires, WEBSER is within the tag of VAR, which means that WEBSER is trusted by VAR. Suppose the parameter PAR in WEBSER receives the information of VAR. Since receiving the information of an argument through a parameter can be regarded as an assignment statement, the security level and tag of PAR are set to those of VAR according to Rule 2. After the argument has been received, WSIFC controls the information flows of WEBSER during web service execution. According to Cases 1 and 2 above, no information leakage occurs in a normally executing web service that embeds WSIFC. Therefore, Case 3 is true.
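The argument-passing step of Case 3 can be sketched as follows. The function name and its parameters are illustrative assumptions, not part of the paper's rules; in a real deployment the check would run inside the remote service's embedded WSIFC rather than as a standalone function.

```python
class LeakError(Exception):
    """Raised when the remote service fails the checks of Case 3."""
    pass

def receive_argument(var_slv, var_tag, service_name, embeds_wsifc):
    """Model the checks applied when a remote service WEBSER receives VAR.

    Returns the (security level, tag) that the parameter PAR obtains.
    """
    # The remote service must itself embed WSIFC; otherwise its
    # information flows cannot be controlled after the argument arrives.
    if not embeds_wsifc:
        raise LeakError(f"{service_name} does not embed WSIFC")
    # Rule 6: WEBSER must appear within VAR's tag, i.e. be trusted by VAR.
    if service_name not in var_tag:
        raise LeakError(f"{service_name} is not trusted by the variable")
    # Receiving an argument is treated as an assignment (Rule 2):
    # PAR inherits VAR's security level and tag.
    return var_slv, set(var_tag)
```

A trusted, WSIFC-embedding service thus receives the data together with its level and tag, while an untrusted or unprotected service is refused before any information flows.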
5. PROBLEM DISCUSSION
Several problems must be solved before WSIFC can be widely applied. We discuss some of them below.
1. WSIFC should be embedded in every web service. To use WSIFC to prevent information leakage during web service execution, WSIFC must be embedded in every web service. Requiring every service provider to embed WSIFC in the web services they provide is a real obstacle. Perhaps someday every requester will understand the importance of preventing information leakage during web service execution, and a standard model for such prevention will be defined. When that day comes, we hope that at least some of the concepts in WSIFC can be included in the standard.
2. The definition of security levels used by requesters and that used within web services should be the same. We use an example to explain this. Suppose a requester thinks that a variable with
security level number 8 is accessible only by managers or employees with higher grades, but a web service regards this security level as accessible by vice managers or employees with higher grades. Under these assumptions, the information sent to the web service may be leaked to vice managers. To solve this problem, a standard seems necessary.
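The mismatch can be made concrete with a small sketch; the role names and the mappings below are invented purely for illustration.

```python
# Each side maps the numeric security level 8 to the audience allowed to
# read it. The requester's mapping is stricter than the service's.
requester_view = {8: {"manager", "director"}}
service_view = {8: {"vice_manager", "manager", "director"}}

# Readers the service would admit that the requester never intended:
unintended = service_view[8] - requester_view[8]
print(sorted(unintended))  # ['vice_manager']
```

A shared standard would fix a single level-to-audience mapping, making the `unintended` set empty by construction.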
3. Even if the above problems have been solved, information leakage may still occur. This problem may arise in a web service offered by a dishonest or malicious service provider. To mitigate it, we suggest: (a) do not send highly sensitive information to web services provided by unfamiliar providers, (b) invoke web services provided by providers trusted by the requester whenever possible, and (c) never invoke any web service of a provider that has previously leaked information. The third suggestion may be the most important because it acts as a punishment. This problem reveals that ensuring the security of every service invocation is difficult. It is perhaps the most serious problem to solve, because requiring every service provider to be honest is almost impossible.
Other problems remain. We do not discuss them further because many of them lie not in the technical area but in the area of human management, and are therefore outside the scope of our research.
6. CONCLUSION
Many researchers have designed mechanisms to ensure secure access to web services. According to our survey, existing models generally decide statically whether a requester can invoke a web service but ignore the importance of dynamically preventing information leakage during web service execution. This paper proposes an information flow control model, WSIFC (web service information flow control), for such prevention. WSIFC is lattice-based. The traditional lattice-based information flow control model is a MAC (mandatory access control) model and has been criticized as too restrictive. To overcome this, WSIFC replaces MAC's "no read up" and "no write down" rules with simple rules of its own. To reduce runtime overhead, WSIFC monitors only the flows of sensitive information.
WSIFC should be embedded in every web service. If a web service does not embed the model, information leakage may occur when that web service is invoked. Since web services are provided by providers around the world, we cannot require every provider to embed WSIFC in their web services. This problem is almost unsolvable. Perhaps the software world needs a standard for embedding an information flow control model in every web service. If such a standard is eventually defined, we hope that the key concepts proposed by WSIFC will be adopted.
AUTHORS
Shih-Chien Chou is currently a professor in the Department of Computer Science and
Information Engineering, National Dong Hwa University, Hualien, Taiwan. His research
interests include software engineering, process environment, software reuse, and
information flow control.