Core IEEE Projects is a division of Conceptz Corporate Training focused on project delivery. We are a diversified team working towards a common goal: providing the best project delivery for final-year BE, M.Tech, BCA, BSc, MCA, and MSc students, as well as for industry clients. We work closely with our clients to understand their business processes and give them the best possible delivery model to minimize cost and maximize ROI.
BE Projects 2011 by coreieeeprojects.com
Core IEEE Projects (Division of Conceptz)
#108, 5th Main, 4th Cross, Hanumanth Nagar, Basavanagudi, Bangalore-50
Website: www.coreieeeprojects.com | Contact: 9535052050
IEEE 2011 Projects List/2011 IEEE Projects
Title: A Policy Enforcing Mechanism for Trusted Ad Hoc Networks
To ensure fair and secure communication in Mobile Ad hoc Networks (MANETs),
the applications running in these networks must be regulated by proper
communication policies. However, enforcing policies in MANETs is challenging
because they lack the infrastructure and trusted entities encountered in
traditional distributed systems. This paper presents the design and
implementation of a policy enforcing mechanism based on a trusted execution
monitor built on top of the Trusted Platform Module. Under this mechanism, each
application or protocol has an associated policy. Two instances of an application
running on different nodes may engage in communication only if these nodes
enforce the same set of policies for both the application and the underlying
protocols used by the application. In this way, nodes can form trusted, application-
centric networks. Before allowing a node to join such a network, Satem verifies its
trustworthiness in enforcing the required set of policies. If any of them is
compromised, Satem disconnects the node from the network. We demonstrate the
correctness of our solution through security analysis, and its low overhead
through performance evaluation of the applications.
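The admission rule above reduces to a set-equality check between the policy sets that two nodes enforce for an application. A minimal sketch, with illustrative policy names (not Satem's actual policy format):

```python
# Sketch of the policy-matching admission rule: two instances of `app`
# may communicate only if both nodes enforce identical policy sets for
# the app and its underlying protocols. Policy names are illustrative.

def may_communicate(node_a_policies, node_b_policies, app):
    return node_a_policies.get(app) == node_b_policies.get(app)

a = {"chat": {"encrypt-aes", "rate-limit", "tcp-strict"}}
b = {"chat": {"encrypt-aes", "rate-limit", "tcp-strict"}}
c = {"chat": {"encrypt-aes"}}          # enforces a weaker policy set

assert may_communicate(a, b, "chat")   # matching policies: allowed
assert not may_communicate(a, c, "chat")  # mismatch: disconnected
```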
Title: A Query Formulation Language for the Data Web
We present a query formulation language called MashQL in order to easily query
and fuse structured data on the web. The main novelty of MashQL is that it allows
people with limited IT-skills to explore and query one or multiple data sources
without prior knowledge about the schema, structure, vocabulary, or any
technical details of these sources. More importantly, to be robust and cover most
cases in practice, we do not assume that a data source has an offline or
inline schema. This poses several language-design and performance complexities
that we fundamentally tackle. To illustrate the query formulation power of
MashQL, and without loss of generality, we chose the Data Web scenario. We also
chose querying RDF, as it is the most primitive data model; hence, MashQL can be
similarly used for querying relational databases and XML. We present two
implementations of MashQL, an online mashup editor, and a Firefox add-on. The
former illustrates how MashQL can be used to query and mash up the Data Web
as simply as filtering and piping web feeds; the Firefox add-on illustrates
using the browser as a web composer rather than only a navigator. Finally, we
evaluate MashQL on querying two datasets, DBLP and DBpedia, and show that
our indexing techniques allow instant user interaction.
Title: A Privacy-Preserving Remote Data Integrity Checking Protocol with Data Dynamics and Public
Verifiability
Title: Adaptive Fault Tolerant QoS Control Algorithms for Maximizing System Lifetime of Query-Based Wireless Sensor Networks
Data sensing and retrieval in wireless sensor systems have widespread
application in areas such as security and surveillance monitoring, and command
and control in battlefields. In query-based wireless sensor systems, a user would
issue a query and expect a response to be returned within the deadline. While the
use of fault tolerance mechanisms through redundancy improves query reliability
in the presence of unreliable wireless communication and sensor faults, it could
cause the energy of the system to be quickly depleted. Therefore, there is an
inherent tradeoff between query reliability and energy consumption in query-
based wireless sensor systems. In this paper, we develop adaptive fault tolerant
quality of service (QoS) control algorithms based on hop-by-hop data delivery
utilizing “source” and “path” redundancy, with the goal of satisfying application QoS
requirements while prolonging the lifetime of the sensor system. We develop a
mathematical model for the lifetime of the sensor system as a function of system
parameters, including the “source” and “path” redundancy levels utilized. We
discover that there exist optimal “source” and “path” redundancy levels under which
the lifetime of the system is maximized while satisfying application QoS
requirements. Numerical data are presented and validated through extensive
simulation, with physical interpretations given, to demonstrate the feasibility of
our algorithm design.
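The reliability-versus-lifetime tradeoff above can be illustrated with a toy grid search: pick the redundancy pair that maximizes lifetime subject to a reliability requirement. The energy and reliability formulas below are illustrative stand-ins, not the paper's analytical model:

```python
# Toy search for optimal ("source", "path") redundancy: higher redundancy
# raises query reliability but drains energy faster. Formulas are didactic
# stand-ins for the paper's mathematical lifetime model.

def reliability(ms, mp, p=0.8):
    # query succeeds if at least one of ms sources delivers over at
    # least one of mp paths, each path succeeding with probability p
    path_ok = 1 - (1 - p) ** mp
    return 1 - (1 - path_ok) ** ms

def lifetime(ms, mp, energy_budget=1000.0):
    # energy spent per query grows with the redundancy used
    return energy_budget / (ms * mp)

def optimal_redundancy(r_req=0.95, max_m=5):
    best = None
    for ms in range(1, max_m + 1):
        for mp in range(1, max_m + 1):
            if reliability(ms, mp) >= r_req:
                lt = lifetime(ms, mp)
                if best is None or lt > best[0]:
                    best = (lt, ms, mp)
    return best

best = optimal_redundancy()
print(best)
```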
Title: Adaptive Provisioning of Human Expertise in Service-oriented Systems
Web-based collaborations have become essential in today’s business
environments. Due to the availability of various SOA frameworks, Web services
emerged as the de facto technology to realize flexible compositions of services.
While most existing work focuses on the discovery and composition of software-
based services, we highlight concepts for a people-centric Web. Knowledge-
intensive environments clearly demand the provisioning of human expertise along
with the sharing of computing resources or business data through software-based
services. To address these challenges, we introduce an adaptive approach
allowing humans to provide their expertise through services using SOA standards
such as WSDL and SOAP. The seamless integration of humans in the SOA loop
triggers numerous social implications, such as evolving expertise and drifting
interests of human service providers. Here we propose a framework based
on interaction monitoring techniques that enables adaptations in SOA-based socio-
technical systems.
Title: Automated Certification for Compliant Cloud-based Business Processes
A key problem in the deployment of large-scale, reliable cloud
computing is the difficulty of certifying the compliance of business processes
operating in the cloud. Standard audit procedures such as SAS-70 and SAS-117
are hard to conduct for cloud-based processes. This paper proposes a novel
approach to certify the compliance of business processes with regulatory
requirements. The approach translates process models into their corresponding
Petri net representations and checks them against requirements also expressed in
this formalism. Being based on Petri nets, the approach provides well-founded
evidence of adherence and, in case of noncompliance, indicates the possible
vulnerabilities.
Keywords: Business process models, Cloud computing, Compliance certification, Audit, Petri nets.
Title: Data Integrity Proofs in Cloud Storage
Cloud computing has been envisioned as the de facto solution to the rising
storage costs of IT enterprises. With the high cost of data storage devices, as well
as the rapid rate at which data is being generated, it proves costly for enterprises
and individual users to frequently update their hardware. Apart from reducing
storage costs, outsourcing data to the cloud also reduces maintenance. Cloud
storage moves the user’s data to large, remotely located data centers over which
the user does not have any control. However, this unique feature of the cloud
poses many new security challenges which need to be clearly understood and
resolved. We provide a scheme that gives a proof of data integrity in the cloud,
which the customer can employ to check the correctness of his data. This proof
can be agreed upon by both the cloud and the customer and can be incorporated
in the Service Level Agreement (SLA).
Title: Data Leakage Detection
A data distributor has given sensitive data to a set of supposedly trusted agents
(third parties). Some of the data is leaked and found in an unauthorized place
(e.g., on the web or somebody’s laptop). The distributor must assess the likelihood
that the leaked data came from one or more agents, as opposed to having been
independently gathered by other means. We propose data allocation strategies
(across the agents) that improve the probability of identifying leakages. These
methods do not rely on alterations of the released data (e.g., watermarks). In
some cases we can also inject “realistic but fake” data records to further improve
our chances of detecting leakage and identifying the guilty party.
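The distributor's assessment can be illustrated with a simplified guilt model: each leaked object came either from an independent guess (probability p) or, equally likely, from one of the agents who held it. The estimator below is a didactic stand-in for the paper's probability model:

```python
# Simplified guilt estimator: an agent looks guiltier the more of the
# leaked objects only it (or few others) received. The formula is a
# didactic stand-in, not the paper's exact estimator.

def guilt_probability(leaked, holdings, agent, p=0.2):
    prob_innocent = 1.0
    for obj in leaked:
        holders = [a for a, objs in holdings.items() if obj in objs]
        if agent in holders:
            # obj was guessed with probability p; otherwise it came
            # from one of the holders, each equally likely
            prob_from_agent = (1 - p) / len(holders)
            prob_innocent *= 1 - prob_from_agent
    return 1 - prob_innocent

holdings = {"A": {1, 2, 3}, "B": {3, 4}}
leaked = {1, 2, 3}
pa = guilt_probability(leaked, holdings, "A")
pb = guilt_probability(leaked, holdings, "B")
assert pa > pb   # A held more of the leaked data, so A looks guiltier
```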
Title: Efficient Computation of Range Aggregates Against Uncertain Location Based Queries
In many applications, including location based services, queries may not be
precise. In this paper, we study the problem of efficiently computing range
aggregates in a multidimensional space when the query location is uncertain.
Specifically, for a query point Q whose location is uncertain and a set S of points
in a multi-dimensional space, we want to calculate the aggregate (e.g., count,
average, and sum) over the subset S′ of S such that, for each point p in S′, Q has at
least probability θ of being within distance γ of p. We propose novel, efficient
techniques to solve the problem following the filtering-and-verification paradigm.
In particular, two novel filtering techniques are proposed to effectively and efficiently remove data points
from verification. Our comprehensive experiments based on both real and
synthetic data demonstrate the efficiency and scalability of our techniques.
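The query semantics can be made concrete with a brute-force Monte Carlo check of the per-point probability; the paper's filtering techniques exist precisely to avoid this expensive per-point verification. The uniform-disk uncertainty region below is an assumption for illustration:

```python
# Brute-force baseline for the uncertain range count: keep each point p
# whose probability of lying within distance gamma of the uncertain
# query Q is at least theta. Q is assumed uniform over a disk here.
import math
import random

random.seed(7)

def uncertain_range_count(q_center, q_radius, points, gamma, theta, n=20000):
    count = 0
    for p in points:
        hits = 0
        for _ in range(n):
            # sample Q uniformly from its uncertainty disk
            ang = random.uniform(0, 2 * math.pi)
            r = q_radius * math.sqrt(random.random())
            q = (q_center[0] + r * math.cos(ang),
                 q_center[1] + r * math.sin(ang))
            if math.dist(q, p) <= gamma:
                hits += 1
        if hits / n >= theta:
            count += 1
    return count

pts = [(0.0, 0.0), (5.0, 0.0)]   # one point near Q, one far away
count = uncertain_range_count((0.0, 0.0), 1.0, pts, gamma=2.0, theta=0.9)
print(count)
```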
Title: Enabling Public Auditability and Data Dynamics For Storage Security in Cloud Computing
Cloud Computing has been envisioned as the next-generation architecture of IT
Enterprise. It moves the application software and databases to the centralized
large data centers, where the management of the data and services may not be
fully trustworthy. This unique paradigm brings about many new security
challenges, which have not been well understood. This work studies the problem
of ensuring the integrity of data storage in Cloud Computing. In particular, we
consider the task of allowing a third party auditor (TPA), on behalf of the cloud
client, to verify the integrity of the dynamic data stored in the cloud. The
introduction of the TPA eliminates the involvement of the client in auditing
whether his data stored in the cloud is indeed intact. The support for data
dynamics via the most general forms of data operation, such as block
modification, insertion and deletion, is also a significant step toward practicality,
since services in Cloud Computing are not limited to archive or backup data only.
While prior works on ensuring remote data integrity often lack support for
either public auditability or dynamic data operations, this paper achieves both.
We first identify the difficulties and potential security problems of direct
extensions with fully dynamic data updates from prior works and then show how
to construct an elegant verification scheme for the seamless integration of these
two salient features in our design. In particular, to achieve efficient data
dynamics, we improve the existing proof-of-storage models by manipulating block
tag authentication. To support efficient handling of multiple auditing tasks, we
further explore the signature technique to extend our main result into a multi-
user setting, where the TPA can perform multiple auditing tasks simultaneously.
Extensive security and performance analysis show that the proposed schemes are
highly efficient and provably secure.
Title: Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud
Cloud computing companies have started to integrate frameworks for parallel
data processing in their product portfolio, making it easy for customers to access
these services and to deploy their programs. However, the processing frameworks
which are currently used have been designed for static, homogeneous cluster
setups and disregard the particular nature of a cloud. Consequently, the allocated
compute resources may be inadequate for large parts of the submitted job and
unnecessarily increase processing time and cost. In this paper we discuss the
opportunities and challenges for efficient parallel data processing in clouds and
present our research project. It is the first data processing framework to
explicitly exploit the dynamic resource allocation offered by today’s IaaS clouds
for both task scheduling and execution. Particular tasks of a processing job can
be assigned to different types of virtual machines, which are automatically
instantiated and terminated during job execution.
Title: Exploring Application-Level Semantics for Data Compression
Natural phenomena show that many creatures form large social groups and move
in regular patterns. However, previous works focus on finding the movement
patterns of each single object or all objects. In this paper, we first propose an
efficient distributed mining algorithm to jointly identify a group of moving
objects and discover their movement patterns in wireless sensor networks.
Afterward, we propose a compression algorithm, called 2P2D, which exploits the
obtained group movement patterns to reduce the amount of delivered data.
The compression algorithm includes a sequence merge phase and an entropy
reduction phase. In the sequence merge phase, we propose a Merge algorithm to
merge and compress the location data of a group of moving objects. In the entropy
reduction phase, we formulate a Hit Item Replacement (HIR) problem and propose a
Replace algorithm that obtains the optimal solution. Moreover, we devise three
replacement rules and derive the maximum compression ratio. The experimental
results show that the proposed compression algorithm leverages the group
movement patterns to reduce the amount of delivered data effectively and
efficiently.
Title: Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques
Recommender systems are becoming increasingly important to
individual users and businesses for providing personalized recommendations.
However, while the majority of algorithms proposed in the recommender systems
literature have focused on improving recommendation accuracy, other important
aspects of recommendation quality, such as the diversity of recommendations,
have often been overlooked. In this paper, we introduce and explore a number of
item ranking techniques that can generate recommendations that have
substantially higher aggregate diversity across all users while maintaining
comparable levels of recommendation accuracy. Comprehensive empirical
evaluation consistently shows the diversity gains of the proposed techniques
using several real-world rating datasets and different rating prediction
algorithms.
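One ranking-based technique in this spirit is to re-rank items the user is predicted to like by ascending popularity, so that less-popular items surface more often across users and aggregate diversity rises at a small accuracy cost. Data and threshold below are illustrative, not the paper's exact technique:

```python
# Popularity-ascending re-ranking: among items whose predicted rating
# clears a threshold, recommend the least popular first. Illustrative
# data; the accuracy/diversity tradeoff is controlled by the threshold.

def recommend(predictions, popularity, threshold=3.5, top_n=2):
    # predictions: {item: predicted rating} for one user
    candidates = [i for i, r in predictions.items() if r >= threshold]
    return sorted(candidates, key=lambda i: popularity[i])[:top_n]

preds = {"a": 4.8, "b": 4.5, "c": 3.9, "d": 2.0}
pop = {"a": 900, "b": 500, "c": 40, "d": 10}
recs = recommend(preds, pop)
print(recs)   # least-popular liked items ranked first
```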
Title: Jamming-Aware Traffic Allocation for Multiple-Path Routing Using Portfolio Selection
Multiple-path source routing protocols allow a data source node to
distribute the total traffic among available paths. In this article, we consider the
problem of jamming-aware source routing in which the source node performs
traffic allocation based on empirical jamming statistics at individual network
nodes. We formulate this traffic allocation as a lossy network flow optimization
problem using portfolio selection theory from financial statistics. We show that in
multi-source networks, this centralized optimization problem can be solved using
a distributed algorithm based on decomposition in network utility maximization
(NUM). We demonstrate the network’s ability to estimate the impact of jamming
and incorporate these estimates into the traffic allocation problem. Finally, we
simulate the achievable throughput using our proposed traffic allocation method
in several scenarios.
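The portfolio analogy can be sketched as a mean-variance split of traffic across paths: each path has an estimated packet-success rate (mean) and a jamming-induced variance, and traffic favors high-mean, low-variance paths. The scoring rule below is an illustrative stand-in, not the paper's NUM-based optimization:

```python
# Toy mean-variance traffic allocation across paths, in the spirit of
# portfolio selection: penalize paths whose delivery rate is volatile
# under jamming. The scoring rule is illustrative.

def allocate(means, variances, risk_aversion=1.0):
    scores = [max(0.0, m - risk_aversion * v)
              for m, v in zip(means, variances)]
    total = sum(scores)
    return [s / total for s in scores]

# three paths: steady, jammed-and-noisy, mediocre
w = allocate(means=[0.9, 0.8, 0.5], variances=[0.01, 0.30, 0.05])
assert abs(sum(w) - 1.0) < 1e-9
assert w[0] > w[1]   # the noisy path is penalized despite its high mean
print([round(x, 3) for x in w])
```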
Title: Live Streaming with Receiver-based Peer-division Multiplexing
A number of commercial peer-to-peer systems for live streaming have been
introduced in recent years. The behavior of these popular systems has been
extensively studied in several measurement papers. Due to the proprietary nature
of these commercial systems, however, these studies have had to rely on a “black-box”
approach, where packet traces are collected from a single or a limited number of
measurement points, to infer various properties of traffic on the control and data
planes. Although such studies are useful for comparing different systems from the end-
user’s perspective, it is difficult to intuitively understand the observed properties
without fully reverse-engineering the underlying systems. In this paper we
describe the network architecture of Zattoo, one of the largest production live
streaming providers in Europe at the time of writing, and present a large-scale
measurement study of Zattoo using data collected by the provider. To highlight,
we found that even when the Zattoo system was heavily loaded, with as many as
20,000 concurrent users on a single overlay, the median channel join delay
remained less than 2 to 5 seconds and that, for a majority of users, the streamed
signal lags the over-the-air broadcast signal by no more than 3 seconds.
Title: Monitoring Service Systems from a Language-Action Perspective
The exponential growth in the global economy is being supported by service
systems, realized by recasting mission-critical application services accessed
across organizational boundaries. The Language-Action Perspective (LAP) is based
upon the notion that “expert behavior requires an exquisite sensitivity to context
and that such sensitivity is more in the realm of the human than in that of the
artificial.”
Business processes are increasingly distributed and open, making them prone to
failure. Monitoring is, therefore, an important concern not only for the processes
themselves but also for the services that comprise these processes. We present a
framework for multilevel monitoring of these service systems. It formalizes
interaction protocols, policies, and commitments that account for standard and
extended effects following the language-action perspective, and allows
specification of goals and monitors at varied abstraction levels. We demonstrate
how the framework can be implemented and evaluate it with multiple scenarios,
such as a transaction between a merchant and a customer, that include specifying
and monitoring open-service policy commitments.
Title: Network Coding Based Privacy Preservation against Traffic Analysis in Multi-hop Wireless
Networks
Privacy threat is one of the critical issues in multihop wireless
networks, where attacks such as traffic analysis and flow tracing can be easily
launched by a malicious adversary due to the open wireless medium. Network
coding has the potential to thwart these attacks since the coding/mixing
operation is encouraged at intermediate nodes. However, the simple deployment
of network coding cannot achieve the goal once enough packets are collected by
the adversaries. On the other hand, the coding/mixing nature precludes the
feasibility of employing the existing privacy -preserving techniques, such as Onion
Routing. In this paper, we propose a novel network coding based privacy-
preserving scheme against traffic analysis in multihop wireless networks. With
homomorphic encryption, the proposed scheme offers two significant privacy-
preserving features, packet flow untraceability and message content
confidentiality, for efficiently thwarting traffic analysis attacks. Moreover,
the proposed scheme keeps the random coding feature. Theoretical analysis and
simulative evaluation demonstrate the validity and efficiency of the proposed
scheme.
Title: One Size Does Not Fit All: Towards User- and Query-Dependent Ranking for Web Databases
With the emergence of the deep Web, searching Web databases in domains such
as vehicles, real estate, etc. has become a routine task. One of the problems in this
context is ranking the results of a user query. Earlier approaches for addressing
this problem have used frequencies of database values, query logs, and user
profiles. A common thread in most of these approaches is that ranking is done in
a user- and/or query-independent manner. This paper proposes a novel query-
and user-dependent approach for ranking query results in Web databases. We
present a ranking model, based on two complementary notions of user and query
similarity, to derive a ranking function for a given user query. This function is
acquired from a sparse workload comprising several such ranking functions
derived for various user-query pairs. The model is based on the intuition that
similar users display comparable ranking preferences over the results of similar
queries. We define these similarities formally in alternative ways and discuss
their effectiveness analytically and experimentally over two distinct Web
databases.
Title: Optimal service pricing for a cloud cache
Cloud applications that offer data management services are emerging. Such
clouds support caching of data in order to provide quality query services. The
users can query the cloud data, paying the price for the infrastructure they use.
Cloud management necessitates an economy that manages the service of multiple
users in an efficient but also resource-economic way that allows for cloud profit.
Naturally, the maximization of cloud profit given some guarantees for user
satisfaction presumes an appropriate price-demand model that enables optimal
pricing of query services. The model should be plausible in that it reflects the
correlation of cache structures involved in the queries. Optimal pricing is
achieved based on a dynamic pricing scheme that adapts to time changes. This
paper proposes a novel price-demand model designed for a cloud cache and a
dynamic pricing scheme for queries executed in the cloud cache. The pricing
solution employs a novel method that estimates the correlations of the cache
services in a time-efficient manner. The experimental study shows the efficiency
of the solution.
Title: Optimal Stochastic Location Updates In Mobile Ad Hoc Networks
We consider the location service in a mobile ad hoc network (MANET), where
each node needs to maintain its location information by 1) frequently updating
its location information within its neighboring region, which is called
neighborhood update (NU), and 2) occasionally updating its location information
to certain distributed location servers in the network, which is called location
server update (LSU). The tradeoff between the operation costs in location updates
and the performance losses of the target application due to location inaccuracies
(i.e., application costs) imposes a crucial question for nodes to decide the optimal
strategy to update their location information, where the optimality is in the sense
of minimizing the overall costs. In this paper, we develop a stochastic sequential
decision framework to analyze this problem. Under a Markovian mobility model,
the location update decision problem is modeled as a Markov Decision Process
(MDP). We first investigate the monotonicity properties of optimal NU and LSU
operations with respect to location inaccuracies under a general cost setting.
Then, given a separable cost structure, we show that the location update
decisions of NU and LSU can be independently carried out without loss of
optimality, i.e., a separation property. From the discovered separation property of
the problem structure and the monotonicity properties of optimal actions, we find
that 1) there always exists a simple optimal threshold-based update rule for LSU
operations; 2) for NU operations, an optimal threshold-based update rule exists in
a low-mobility scenario. In the case that no a priori knowledge of the MDP model
is available, we also introduce a practical model-free learning approach to find a
near-optimal solution for the problem.
Title: Personalized Ontology Model for Web Information Gathering
As a model for knowledge description and formalization, ontologies are widely used
to represent user profiles in personalized web information gathering. However,
when representing user profiles, many models have utilized only knowledge from
either a global knowledge base or user local information. In this paper, a
personalized ontology model is proposed for knowledge representation and
reasoning over user profiles. This model learns ontological user profiles from both
a world knowledge base and user local instance repositories. The ontology model is
evaluated by comparing it against benchmark models in web information gathering.
The results show that this ontology model is successful.
Title: Privacy-Preserving Updates to Anonymous and Confidential Databases
Suppose Alice owns a k-anonymous database and needs to determine whether her
database, when inserted with a tuple owned by Bob, is still k-anonymous. Also,
suppose that access to the database is strictly controlled because, for example, the
data are used for certain experiments that need to be kept confidential.
Clearly, allowing Alice to directly read the contents of the tuple breaks the
privacy of Bob (e.g., a patient’s medical record); on the other hand, the
confidentiality of the database managed by Alice is violated once Bob has access
to the contents of the database. Thus, the problem is to check whether the
database inserted with the tuple is still k-anonymous, without letting Alice and
Bob know the contents of the tuple and the database, respectively. In this paper,
we propose two protocols solving this problem on suppression-based and
generalization-based k-anonymous and confidential databases. The protocols rely
on well-known cryptographic assumptions, and we provide theoretical analyses to
prove their soundness and experimental results to illustrate their efficiency.
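The (non-private) predicate underlying both protocols is the k-anonymity check itself: every quasi-identifier combination must occur at least k times. The protocols compute this without revealing the tuple or the database; the check is shown here in the clear, with illustrative data, purely to pin down the definition:

```python
# Plain (non-private) k-anonymity check: a table is k-anonymous if every
# combination of quasi-identifier values occurs at least k times.
# Illustrative data; the paper's protocols evaluate this privately.
from collections import Counter

def is_k_anonymous(rows, qi_indexes, k):
    groups = Counter(tuple(r[i] for i in qi_indexes) for r in rows)
    return all(c >= k for c in groups.values())

table = [("3058*", "F", "flu"), ("3058*", "F", "cold"),
         ("3058*", "M", "flu"), ("3058*", "M", "asthma")]
qi = (0, 1)   # generalized zip prefix and sex are the quasi-identifiers

assert is_k_anonymous(table, qi, k=2)
# inserting a tuple with a unique quasi-identifier group breaks 2-anonymity
assert not is_k_anonymous(table + [("9102*", "F", "flu")], qi, k=2)
```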
Title: Publishing Search Logs – A Comparative Study of Privacy Guarantees
Search engine companies collect the “database of intentions”, the histories of
their users’ search queries. These search logs are a gold mine for researchers.
Search engine companies, however, are wary of publishing search logs in order
not to disclose sensitive information. In this paper we analyze algorithms for
publishing frequent keywords, queries and clicks of a search log. We first show
how methods that achieve variants of k-anonymity are vulnerable to active
attacks. We then demonstrate that the stronger guarantee ensured by differential
privacy unfortunately does not provide any utility for this problem. Our paper
concludes with a large experimental study using real applications where we
compare ZEALOUS and previous work that achieves k-anonymity in search log
publishing. Our results show that ZEALOUS yields comparable utility to
k-anonymity while at the same time achieving much stronger privacy guarantees.
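The flavor of such a differentially private release can be sketched as Laplace noise plus a publication threshold on keyword counts. Parameters are illustrative, and this omits the per-user contribution bounding and pre-filtering a real scheme such as ZEALOUS requires:

```python
# Sketch of noisy frequent-keyword release: add Laplace noise to each
# count and publish only noisy counts above a threshold, so rare (and
# potentially identifying) queries are suppressed. Illustrative only.
import math
import random

random.seed(42)   # deterministic noise for the illustration

def laplace(scale):
    # inverse-CDF sampling of Laplace(0, scale)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def publish(counts, epsilon=1.0, tau=20):
    noisy = {k: c + laplace(1.0 / epsilon) for k, c in counts.items()}
    return {k: round(v) for k, v in noisy.items() if v > tau}

logs = {"weather": 500, "flu symptoms": 90, "rare-secret-query": 1}
out = publish(logs)
assert "weather" in out
assert "rare-secret-query" not in out   # infrequent queries are suppressed
```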
Title: Robust Correlation of Encrypted Attack Traffic through Stepping Stones by Flow Watermarking
Network-based intruders seldom attack their victims directly from their own
computers. Often, they stage their attacks through intermediate “stepping stones”
in order to conceal their identity and origin. To identify the source of the attack
behind the stepping stone(s), it is necessary to correlate the incoming and
outgoing flows or connections of a stepping stone. To resist attempts at
correlation, the attacker may encrypt or otherwise manipulate the connection
traffic. Timing-based correlation approaches have been shown to be quite
effective in correlating encrypted connections. However, timing-based correlation
approaches are subject to timing perturbations that may be deliberately
introduced by the attacker at stepping stones. In this project, our watermark-
based approach is “active” in that it embeds a unique watermark into the
encrypted flows by slightly adjusting the timing of selected packets. The unique
watermark that is embedded in the encrypted flow gives us a number of
advantages over passive timing-based correlation in resisting timing
perturbations by the attacker. A two-fold monotonically increasing compound
mapping is created and proved to yield more distinctive visible watermarks in
the watermarked image. Security protection measures by parameter and mapping
randomizations have also been proposed to deter attackers from illicit image
recoveries.
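The embedding idea can be sketched as delaying selected packets to lengthen chosen inter-packet gaps. Real timing-watermark schemes decode blindly from quantized gaps; this simplified sketch instead gives the decoder the original reference timings, which is an assumption made purely to keep the example short:

```python
# Toy timing watermark: encode bit 1 by adding DELTA to a selected
# inter-packet gap (delaying that packet and all later ones), bit 0 by
# leaving the gap unchanged; decode by comparing gaps to the originals.

DELTA = 0.05   # seconds added to a gap to encode a 1 bit

def embed(timestamps, bits):
    out, shift = [], 0.0
    for i, t in enumerate(timestamps):
        if 0 < i <= len(bits) and bits[i - 1]:
            shift += DELTA   # delaying one packet shifts all later ones
        out.append(t + shift)
    return out

def decode(original, watermarked, nbits):
    gaps_o = [original[i + 1] - original[i] for i in range(nbits)]
    gaps_w = [watermarked[i + 1] - watermarked[i] for i in range(nbits)]
    return [1 if gw - go > DELTA / 2 else 0 for go, gw in zip(gaps_o, gaps_w)]

ts = [0.0, 0.3, 0.7, 1.2, 1.4]   # packet send times in seconds
bits = [1, 0, 1]
assert decode(ts, embed(ts, bits), len(bits)) == bits
```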
Title: Scalable Learning of Collective Behavior
The study of collective behavior is to understand how individuals behave in a
social networking environment. Oceans of data generated by social media like
Facebook, Twitter, Flickr, and YouTube present opportunities and challenges for
studying collective behavior on a large scale. In this work, we aim to learn to predict
collective behavior in social media. In particular, given information about some
individuals, how can we infer the behavior of unobserved individuals in the same
network? A social-dimension-based approach has been shown effective in
addressing the heterogeneity of connections presented in social media. However,
the networks in social media are normally of colossal size, involving hundreds of
thousands of actors. The scale of these networks entails scalable learning of
models for collective behavior prediction. To address the scalability issue, we
propose an edge-centric clustering scheme to extract sparse social dimensions.
With sparse social dimensions, the proposed approach can efficiently handle
networks of millions of actors while demonstrating a comparable prediction
performance to other non-scalable methods.
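The edge-centric idea can be illustrated with a small sketch. This is not the paper's exact EdgeCluster algorithm; the edge clustering here is hand-assigned to stand in for k-means over edges, and only the second step, deriving sparse node dimensions from an edge clustering, is shown:

```python
from collections import defaultdict

# Sketch: after clustering EDGES into k groups, assign each node every
# cluster that one of its incident edges belongs to. A node touches few
# edges, so its social-dimension vector is naturally sparse.

def sparse_social_dimensions(edges, edge_cluster):
    """edges: list of (u, v); edge_cluster: parallel list of cluster ids.
    Returns {node: sorted list of cluster ids} -- the sparse dimensions."""
    dims = defaultdict(set)
    for (u, v), c in zip(edges, edge_cluster):
        dims[u].add(c)
        dims[v].add(c)
    return {n: sorted(cs) for n, cs in dims.items()}

# Toy network: the first three edges form one community, the last two
# another; cluster ids are hand-assigned for illustration.
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)]
clusters = [0, 0, 0, 1, 1]
dims = sparse_social_dimensions(edges, clusters)
# Node 3 bridges both communities, so it receives both dimensions.
```

The sparsity is what makes downstream learning scale: a node's dimension vector has at most as many nonzeros as it has incident edges, regardless of network size.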
Title: The Awareness Network: To Whom Should I Display My Actions and Whose Actions Should I
Monitor?
The concept of awareness plays a pivotal role in research in Computer -Supported
Cooperative Work. Recently, Software Engineering researchers interested in the
collaborative nature of software development have explored the implications of
this concept in the design of software development tools. A critical aspect of
awareness is the associated coordinative work practices of displaying and
monitoring actions. This aspect concerns how colleagues monitor one another’s
actions to understand how these actions impact their own work and how they
display their actions in such a way that others can easily monitor them while
doing their own work. In this paper, we focus on an additional aspect of
awareness: the identification of the social actors who should be monitored and
the actors to whom their actions should be displayed. We address this aspect by
presenting software developers’ work practices based on ethnographic data from
three different software development teams. In addition, we illustrate how these
work practices are influenced by different factors, including the organizational
setting, the age of the project, and the software architecture. We discuss how our
results are relevant for both CSCW and Software Engineering researchers.
Title: Throughput Optimization in High Speed Downlink Packet Access (HSDPA)
In this paper, we investigate throughput optimization in High Speed Downlink
Packet Access (HSDPA). Specifically, we propose offline and online algorithms for
adjusting the Channel Quality Indicator (CQI) used by the network to schedule
data transmission. In the offline algorithm, a given target BLER is achieved by
adjusting CQI based on ACK/NAK history. By sweeping through different target
BLERs, we can find the throughput optimal BLER offline. This algorithm could be
used not only to optimize throughput but also to enable fair resource allocation
among mobile users in HSDPA. In the online algorithm, the CQI offset is adapted
using an estimated short-term throughput gradient without specifying a target
BLER. An adaptive stepsize mechanism is proposed to track temporal variation of
the environment. We investigate convergence behavior of both algorithms.
Simulation results show that the proposed offline algorithm can achieve the given
target BLER with good accuracy. Both algorithms yield up to 30% HSDPA
throughput improvement over that with 10% target BLER.
Title: USHER Improving Data Quality with Dynamic Forms
Data quality is a critical problem in modern databases. Data entry forms present
the first and arguably best opportunity for detecting and mitigating errors, but
there has been little research into automatic methods for improving data quality
at entry time. In this paper, we propose USHER, an end-to-end system for form
design, entry, and data quality assurance. Using previous form submissions,
USHER learns a probabilistic model over the questions of the form. USHER then
applies this model at every step of the data entry process to improve data quality.
Before entry, it induces a form layout that captures the most important data
values of a form instance as quickly as possible. During entry, it dynamically
adapts the form to the values being entered, and enables real-time feedback to
guide the data enterer toward their intended values. After entry, it re-asks
questions that it deems likely to have been entered incorrectly. We evaluate all
three components of USHER using two real-world data sets. Our results
demonstrate that each component has the potential to improve data quality
considerably, at a reduced cost when compared to current practice.
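The after-entry step can be sketched as follows. This is a toy illustration, not USHER's learned probabilistic model: `cond_prob`, the threshold, and the example model are all hypothetical, showing only the idea of re-asking answers that are improbable given the rest of the submission.

```python
# Sketch: flag for re-asking any question whose submitted answer has low
# probability under a model conditioned on the other answers. The model
# and threshold here are illustrative stand-ins.

def questions_to_reask(submission, cond_prob, threshold=0.05):
    """submission: {question: answer}; cond_prob(q, answer, others) ->
    model probability of `answer` for question q given the other fields."""
    flagged = []
    for q, a in submission.items():
        others = {k: v for k, v in submission.items() if k != q}
        if cond_prob(q, a, others) < threshold:
            flagged.append(q)
    return flagged

# Toy model: an "infant" age group with a reported age over 2 is implausible.
def toy_model(q, answer, others):
    if q == "age_group" and answer == "infant" and others.get("age", 0) > 2:
        return 0.01
    return 0.9

form = {"age": 30, "age_group": "infant"}
suspect = questions_to_reask(form, toy_model)
```

The same conditional model can drive the during-entry feedback as well, by ranking candidate values for the field currently being filled in.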