SEABIRDS
IEEE 2012 - 2013 SOFTWARE PROJECTS IN VARIOUS DOMAINS
| JAVA | J2ME | J2EE | DOTNET | MATLAB | NS2 |
SBGC Chennai: 24/83, O Block, MMDA Colony, Arumbakkam, Chennai - 600106. Mobile: 09944361169
SBGC Trichy: 4th Floor, Surya Complex, Singarathope Bus Stop, Old Madurai Road, Trichy - 620002. Mobile: 09003012150, Phone: 0431-4012303
Web: www.ieeeproject.in
E-Mail: ieeeproject@hotmail.com
SBGC provides IEEE 2012-2013 projects for all final-year students, with technical guidance for two categories.
Category 1: Students with their own project ideas, or with new or old IEEE papers.
Category 2: Students selecting from our project list.
When you register for a project, we ensure that it is implemented to your fullest satisfaction and that you gain a thorough understanding of every aspect of the project.
SBGC PROVIDES YOU THE LATEST IEEE 2012 / IEEE 2013 PROJECTS FOR STUDENTS OF THE FOLLOWING DEPARTMENTS:
B.E., B.TECH, M.TECH, M.E., DIPLOMA, MS, BSC, MSC, BCA, MCA, MBA, BBA, PHD
B.E. (ECE, EEE, E&I, ICE, MECH, PROD, CSE, IT, THERMAL, AUTOMOBILE, MECHATRONICS, ROBOTICS)
B.TECH (ECE, MECHATRONICS, E&I, EEE, MECH, CSE, IT, ROBOTICS)
M.TECH (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
M.E. (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
DIPLOMA (CE, EEE, E&I, ICE, MECH, PROD, CSE, IT)
MBA (HR, FINANCE, MANAGEMENT, HOTEL MANAGEMENT, SYSTEM MANAGEMENT, PROJECT MANAGEMENT, HOSPITAL MANAGEMENT, SCHOOL MANAGEMENT, MARKETING MANAGEMENT, SAFETY MANAGEMENT)
We also run training, project, and R&D divisions to serve students and make them job-oriented professionals.
PROJECT SUPPORT AND DELIVERABLES
Project Abstract
IEEE Paper
IEEE Reference Papers, Materials & Books on CD
PPT / Review Material
Project Report (All Diagrams & Screenshots)
Working Procedures
Algorithm Explanations
Project Installation on Laptops
Project Certificate
JAVA DATA MINING
1. A Framework for Personal Mobile Commerce Pattern Mining and Prediction
Abstract
Due to a wide range of potential applications, research on mobile commerce has
received considerable interest from both industry and academia. One of the active
topics is the mining and prediction of users' mobile commerce behaviors, such as
their movements and purchase transactions. In this paper, we
propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and
prediction of mobile users’ movements and purchase transactions under the context of
mobile commerce. The MCE framework consists of three major components: 1)
Similarity Inference Model (SIM) for measuring the similarities among stores and items,
which are two basic mobile commerce entities considered in this paper; 2) Personal
Mobile Commerce Pattern Mine (PMCP-Mine) algorithm for efficient discovery of
mobile users’ Personal Mobile Commerce Patterns (PMCPs); and 3) Mobile Commerce
Behavior Predictor (MCBP) for predicting possible mobile user behaviors. To the best of
our knowledge, this is the first work that facilitates mining and prediction of mobile users'
commerce behaviors in order to recommend stores and items previously unknown to a
user. We perform an extensive experimental evaluation by simulation and show that our
proposals produce excellent results.
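The listing does not reproduce PMCP-Mine itself; as a loose illustration of the frequency-counting step inside such a miner, the Java sketch below tallies the support of (store, item) pairs in one user's transaction log and keeps the frequent ones. The class name, data, and threshold are invented for the example.

import java.util.*;

public class PatternSupportSketch {
    // Count how often each (store, item) pair occurs in one user's log and
    // keep pairs whose support reaches minSupport. This is only the
    // frequency-counting core of a pattern miner, not the full PMCP-Mine.
    static Map<String, Integer> frequentPairs(List<String[]> log, int minSupport) {
        Map<String, Integer> count = new HashMap<>();
        for (String[] tx : log) {                     // tx = {storeId, itemId}
            String key = tx[0] + "->" + tx[1];
            count.merge(key, 1, Integer::sum);
        }
        count.values().removeIf(c -> c < minSupport); // prune infrequent pairs
        return count;
    }

    public static void main(String[] args) {
        List<String[]> log = Arrays.asList(
            new String[]{"storeA", "coffee"},
            new String[]{"storeA", "coffee"},
            new String[]{"storeB", "book"});
        System.out.println(frequentPairs(log, 2));    // {storeA->coffee=2}
    }
}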
2. Efficient Extended Boolean Retrieval
Abstract
Extended Boolean retrieval (EBR) models were proposed nearly three decades ago, but
have had little practical impact, despite their significant advantages compared to
either ranked keyword or pure Boolean retrieval. In particular, EBR models produce
meaningful rankings; their query model allows the representation of complex concepts
in an and-or format; and they are scrutable, in that the score assigned to a document
depends solely on the content of that document, unaffected by any collection
statistics or other external factors. These characteristics make EBR models attractive in
domains typified by medical and legal searching, where the emphasis is on iterative
development of reproducible complex queries of dozens or even hundreds of terms.
However, EBR is much more computationally expensive than the alternatives. We
consider the implementation of the p-norm approach to EBR, and demonstrate that
ideas used in the max-score and wand exact optimization techniques for ranked
keyword retrieval can be adapted to allow selective bypass of documents via a low-
cost screening process for this and similar retrieval models. We also propose term
independent bounds that are able to further reduce the number of score calculations
for short, simple queries under the extended Boolean retrieval model. Together, these
methods yield an overall saving of 50 to 80 percent of the evaluation cost on test
queries drawn from biomedical search.
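The p-norm operators underlying this model are standard in the EBR literature; the Java sketch below scores a document against an OR node and an AND node using the usual p-norm formulas over term weights in [0, 1]. The weights and p value are illustrative assumptions.

public class PNormSketch {
    // p-norm OR: ((w1^p + ... + wn^p) / n)^(1/p)
    static double orScore(double[] w, double p) {
        double s = 0;
        for (double x : w) s += Math.pow(x, p);
        return Math.pow(s / w.length, 1 / p);
    }

    // p-norm AND: 1 - (((1-w1)^p + ... + (1-wn)^p) / n)^(1/p)
    static double andScore(double[] w, double p) {
        double s = 0;
        for (double x : w) s += Math.pow(1 - x, p);
        return 1 - Math.pow(s / w.length, 1 / p);
    }

    public static void main(String[] args) {
        double[] w = {0.9, 0.2, 0.6};           // per-term weights for one document
        System.out.println(orScore(w, 2.0));    // leans toward the best term
        System.out.println(andScore(w, 2.0));   // penalized by the worst term
    }
}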
3. Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques
Abstract
Recommender systems are becoming increasingly important to individual users and
businesses for providing personalized recommendations. However, while the majority of
algorithms proposed in the recommender systems literature have focused on improving
recommendation accuracy (as exemplified by the recent Netflix Prize competition),
other important aspects of recommendation quality, such as the diversity of
recommendations, have often been overlooked. In this paper, we introduce and
explore a number of item ranking techniques that can generate recommendations
that have substantially higher aggregate diversity across all users while maintaining
comparable levels of recommendation accuracy. Comprehensive empirical
evaluation consistently shows the diversity gains of the proposed techniques using
several real-world rating datasets and different rating prediction algorithms.
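One simple member of this family of techniques re-ranks the items a user is predicted to like by ascending popularity, surfacing long-tail items and thereby raising aggregate diversity at a small accuracy cost. The Java sketch below illustrates that idea; the threshold, data, and names are assumptions, not the paper's code.

import java.util.*;

public class DiversityRerankSketch {
    record Item(String id, double predictedRating, int popularity) {}

    // Keep items predicted above a rating threshold, then rank the least
    // popular first so rarely recommended items surface across users.
    static List<Item> recommend(List<Item> candidates, double threshold, int topN) {
        return candidates.stream()
                .filter(i -> i.predictedRating() >= threshold)
                .sorted(Comparator.comparingInt(Item::popularity))
                .limit(topN)
                .toList();
    }

    public static void main(String[] args) {
        List<Item> c = List.of(
            new Item("blockbuster", 4.8, 90_000),
            new Item("niche", 4.6, 120),
            new Item("weak", 3.1, 50));
        System.out.println(recommend(c, 4.5, 2)); // the niche item ranks first
    }
}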
4. BibPro: A Citation Parser Based on Sequence Alignment
A dramatic increase in the number of academic publications has led to growing
demand for efficient organization of the resources to meet researchers' needs. As a
result, a number of network services have compiled databases from the public
resources scattered over the Internet. However, publications by different conferences
and journals adopt different citation styles. Accurately extracting metadata from a
citation string formatted in one of thousands of different styles is an interesting
problem that has attracted a great deal of research attention in recent years. In
this paper, based on the notion of sequence alignment, we present a citation parser
called BibPro that extracts components of a citation string. To demonstrate the efficacy
of BibPro, we conducted experiments on three benchmark data sets. The results show
that BibPro achieved over 90 percent accuracy on each benchmark. Even with
citations and associated metadata retrieved from the web as training data, our
experiments show that BibPro still achieves reasonable performance.
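The alignment core behind such a parser can be sketched with a standard global alignment (Needleman-Wunsch style) over field tokens; BibPro's actual templates and scoring are more elaborate, so treat the token names and scores below as placeholders.

public class AlignmentSketch {
    // Global alignment score between a citation token sequence and a style
    // template: +2 for a match, -1 for a mismatch or gap. The template with
    // the highest score suggests which citation style was used.
    static int align(String[] a, String[] b) {
        int[][] dp = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) dp[i][0] = -i;
        for (int j = 0; j <= b.length; j++) dp[0][j] = -j;
        for (int i = 1; i <= a.length; i++) {
            for (int j = 1; j <= b.length; j++) {
                int match = dp[i - 1][j - 1] + (a[i - 1].equals(b[j - 1]) ? 2 : -1);
                dp[i][j] = Math.max(match, Math.max(dp[i - 1][j] - 1, dp[i][j - 1] - 1));
            }
        }
        return dp[a.length][b.length];
    }

    public static void main(String[] args) {
        String[] citation = {"AUTHOR", "YEAR", "TITLE", "JOURNAL"};
        String[] template = {"AUTHOR", "TITLE", "JOURNAL", "YEAR"};
        // Higher score = citation more likely formatted in this style.
        System.out.println(align(citation, template));
    }
}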
5. Extending Attribute Information for Small Data Set Classification
Data quantity is the main issue in the small data set problem, because insufficient
data usually cannot yield robust classification performance. How to extract
more effective information from a small data set is thus of considerable interest. This
paper proposes a new attribute construction approach which converts the original
data attributes into a higher dimensional feature space to extract more attribute
information by a similarity-based algorithm using the classification-oriented fuzzy
membership function. Seven data sets with different attribute sizes are employed to
examine the performance of the proposed method. The results show that the proposed
method has a superior classification performance when compared to principal
component analysis (PCA), kernel principal component analysis (KPCA), and kernel
independent component analysis (KICA) with a Gaussian kernel in the support vector
machine (SVM) classifier.
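As a hedged sketch of the general idea, not the authors' exact membership function, the Java code below extends a single numeric attribute with one Gaussian-style membership value per class, lifting each sample into a higher-dimensional feature space. The class centers and width are assumed inputs estimated from the small training set.

import java.util.*;

public class AttributeExtensionSketch {
    // For one numeric attribute, extend each sample x with one extra
    // feature per class: exp(-(x - classMean)^2 / (2 * sigma^2)).
    static double[] extend(double x, double[] classMeans, double sigma) {
        double[] features = new double[1 + classMeans.length];
        features[0] = x;                                   // original attribute
        for (int k = 0; k < classMeans.length; k++) {
            double d = x - classMeans[k];
            features[1 + k] = Math.exp(-d * d / (2 * sigma * sigma));
        }
        return features;
    }

    public static void main(String[] args) {
        double[] means = {1.0, 4.0};                       // two class centers
        System.out.println(Arrays.toString(extend(1.2, means, 1.0)));
    }
}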
6. Horizontal Aggregations in SQL to Prepare Data Sets for Data Mining Analysis
Preparing a data set for analysis is generally the most time-consuming task in a data
mining project, requiring many complex SQL queries, joining tables, and aggregating
columns. Existing SQL aggregations have limitations to prepare data sets because they
return one column per aggregated group. In general, a significant manual effort is
required to build data sets, where a horizontal layout is required. We propose simple,
yet powerful, methods to generate SQL code to return aggregated columns in a
horizontal tabular layout, returning a set of numbers instead of one number per row. This
new class of functions is called horizontal aggregations. Horizontal aggregations build
data sets with a horizontal denormalized layout (e.g., point-dimension,
observation-variable, instance-feature), which is the standard layout required by most
data mining algorithms. We propose three fundamental methods to evaluate horizontal
aggregations: CASE: Exploiting the programming CASE construct; SPJ: Based on
standard relational algebra operators (SPJ queries); PIVOT: Using the PIVOT operator,
which is offered by some DBMSs. Experiments with large tables compare the proposed
query evaluation methods. Our CASE method has speed similar to the PIVOT operator
and is much faster than the SPJ method. In general, the CASE and PIVOT methods
exhibit linear scalability, whereas the SPJ method does not.
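The CASE method can be pictured as generated SQL of the shape shown below. This Java sketch is a hypothetical generator for a SUM aggregation pivoted on one dimension column; the table and column names are placeholders, and a real generator would read the pivot values from the database and escape them properly.

import java.util.List;

public class HorizontalAggSketch {
    // Build one SUM(CASE ...) column per dimension value, so each group
    // comes back as a single row with the aggregates laid out horizontally.
    static String caseQuery(String table, String groupCol,
                            String pivotCol, String measure, List<String> values) {
        StringBuilder sql = new StringBuilder("SELECT ").append(groupCol);
        for (String v : values) {
            sql.append(",\n  SUM(CASE WHEN ").append(pivotCol)
               .append(" = '").append(v).append("' THEN ").append(measure)
               .append(" ELSE 0 END) AS ").append(pivotCol).append('_').append(v);
        }
        sql.append("\nFROM ").append(table).append("\nGROUP BY ").append(groupCol);
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(caseQuery("sales", "customer_id",
                                     "quarter", "amount", List.of("Q1", "Q2")));
    }
}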
7. Enabling Multilevel Trust in Privacy Preserving Data Mining
Privacy Preserving Data Mining (PPDM) addresses the problem of developing accurate
models of aggregated data without access to precise information in individual
data records. A widely studied perturbation-based PPDM approach introduces random
perturbation to individual values to preserve privacy before data are published.
Previous solutions of this approach are limited in their tacit assumption of single-level
trust on data miners. In this work, we relax this assumption and expand the scope of
perturbation-based PPDM to Multilevel Trust (MLT-PPDM). In our setting, the more trusted
a data miner is, the less perturbed copy of the data it can access. Under this setting, a
malicious data miner may have access to differently perturbed copies of the same
data through various means, and may combine these diverse copies to jointly infer
additional information about the original data that the data owner does not intend to
release. Preventing such diversity attacks is the key challenge of providing MLT-PPDM
services. We address this challenge by properly correlating perturbation across copies
at different trust levels. We prove that our solution is robust against diversity attacks with
respect to our privacy goal. That is, for data miners who have access to an arbitrary
collection of the perturbed copies, our solution prevents them from jointly reconstructing
the original data more accurately than the best effort using any individual copy in the
collection. Our solution allows a data owner to generate perturbed copies of its data
for arbitrary trust levels on demand. This feature offers data owners maximum flexibility.
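One minimal way to correlate perturbation across trust levels, in the spirit of this abstract, is to derive each lower-trust copy by adding fresh noise on top of the next more-trusted copy, so that combining copies cannot average the noise below the most-trusted level. The Gaussian noise model and variances in this Java sketch are assumptions, not the paper's construction.

import java.util.Random;

public class MultiLevelPerturbSketch {
    // copies[0] is the most trusted (least noisy); each further level adds
    // independent noise on top of the previous copy, so combining copies
    // cannot reduce the noise below the most-trusted level.
    static double[][] perturbedCopies(double[] data, double[] levelStdDev, long seed) {
        Random rng = new Random(seed);
        double[][] copies = new double[levelStdDev.length][data.length];
        double[] prev = data;
        for (int lvl = 0; lvl < levelStdDev.length; lvl++) {
            for (int i = 0; i < data.length; i++) {
                copies[lvl][i] = prev[i] + rng.nextGaussian() * levelStdDev[lvl];
            }
            prev = copies[lvl];                 // chain the noise across levels
        }
        return copies;
    }

    public static void main(String[] args) {
        double[] record = {52_000, 63_500};     // e.g., salaries to be published
        double[][] out = perturbedCopies(record, new double[]{100, 400}, 42L);
        System.out.println(out[0][0] + " vs " + out[1][0]);
    }
}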
8. Using Rule Ontology in Repeated Rule Acquisition from Similar Web Sites
Inferential rules are as essential to the Semantic Web applications as ontology.
Therefore, rule acquisition is also an important issue, and the Web that implies
inferential rules can be a major source of rule acquisition. We expect that it will be
easier to acquire rules from a site by using similar rules of other sites in the same domain
rather than starting from scratch. We propose an automatic rule acquisition
procedure using a rule ontology, RuleToOnto, which represents information about the
rule components and their structures. The rule acquisition procedure consists of a rule
component identification step and a rule composition step. We developed an A*
algorithm for the rule composition, and we performed experiments demonstrating that
our ontology-based rule acquisition approach works in a real-world application.
DOTNET DATA MINING
1. Slicing: A New Approach for Privacy Preserving Data Publishing Dotnet
Several anonymization techniques, such as generalization and bucketization, have
been designed for privacy preserving
microdata publishing. Recent work has shown that generalization loses considerable
amount of information, especially for highdimensional data. Bucketization, on the other
hand, does not prevent membership disclosure and does not apply for data that do
not have a clear separation between quasi-identifying attributes and sensitive
attributes. In this paper, we present a novel technique called slicing, which partitions
the data both horizontally and vertically. We show that slicing preserves better data
utility than generalization and can be used for membership disclosure protection.
Another important advantage of slicing is that it can handle high-dimensional data. We
show how slicing can be used for attribute disclosure protection and develop an
efficient algorithm for computing the sliced data that obey the ℓ-diversity requirement.
Our workload experiments confirm that slicing preserves better utility than
generalization and is more effective than bucketization in workloads involving the
sensitive attribute. Our experiments also demonstrate that slicing can be used to
prevent membership disclosure.
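As an illustrative Java sketch, not the authors' implementation, slicing can be pictured as grouping the columns into a quasi-identifier group and a sensitive group, then independently permuting the groups within each horizontal bucket, which breaks the linkage between quasi-identifiers and the sensitive attribute.

import java.util.*;

public class SlicingSketch {
    // rows: each row is {age, zip, disease}. Column groups: {age, zip} and
    // {disease}. Within one bucket we shuffle the groups independently, so a
    // quasi-identifier pair can no longer be linked to its exact disease.
    static String[][] sliceBucket(String[][] rows, long seed) {
        Random rng = new Random(seed);
        List<String[]> qi = new ArrayList<>();   // {age, zip} group
        List<String> sa = new ArrayList<>();     // {disease} group
        for (String[] r : rows) {
            qi.add(new String[]{r[0], r[1]});
            sa.add(r[2]);
        }
        Collections.shuffle(qi, rng);
        Collections.shuffle(sa, rng);
        String[][] sliced = new String[rows.length][3];
        for (int i = 0; i < rows.length; i++) {
            sliced[i] = new String[]{qi.get(i)[0], qi.get(i)[1], sa.get(i)};
        }
        return sliced;
    }

    public static void main(String[] args) {
        String[][] bucket = {{"34", "60601", "flu"},
                             {"35", "60602", "asthma"},
                             {"36", "60603", "flu"}};
        System.out.println(Arrays.deepToString(sliceBucket(bucket, 7L)));
    }
}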
JAVA NETWORKING
1. Latency Equalization as a New Network Service Primitive
Abstract
Multiparty interactive network applications such as teleconferencing, network gaming,
and online trading are gaining popularity. In addition to end-to-end latency bounds,
these applications require that the delay difference among multiple clients of the
service is minimized for a good interactive experience. We propose a Latency
EQualization (LEQ) service, which equalizes the perceived latency for all clients
participating in an interactive network application. To effectively implement the
proposed LEQ service, network support is essential. The LEQ architecture uses a few
routers in the network as hubs to redirect packets of interactive applications along
paths with similar end-to-end delay. We first formulate the hub selection problem, prove
its NP-hardness, and provide a greedy algorithm to solve it. Through extensive
simulations, we show that our LEQ architecture significantly reduces delay difference
under different optimization criteria that allow or do not allow compromising the per-
user end-to-end delay. Our LEQ service is incrementally deployable in today’s networks,
requiring just software modifications to edge routers.
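Since the hub selection problem is NP-hard, the paper resorts to a greedy algorithm; the Java sketch below shows one plausible greedy shape that repeatedly adds the hub that most reduces the spread of per-client delays. The delay matrix and the max-minus-min spread objective are simplifying assumptions, not the paper's exact formulation.

import java.util.*;

public class LeqHubSketch {
    // delays[c][h]: end-to-end delay of client c if routed through hub h.
    // Greedily pick k hubs so that the spread (max - min) of per-client
    // delays, with each client using its best selected hub, is minimized.
    static Set<Integer> greedyHubs(double[][] delays, int k) {
        int hubs = delays[0].length;
        Set<Integer> chosen = new HashSet<>();
        for (int round = 0; round < k; round++) {
            int bestHub = -1;
            double bestSpread = Double.MAX_VALUE;
            for (int h = 0; h < hubs; h++) {
                if (chosen.contains(h)) continue;
                chosen.add(h);
                double spread = spread(delays, chosen);
                chosen.remove(h);
                if (spread < bestSpread) { bestSpread = spread; bestHub = h; }
            }
            chosen.add(bestHub);
        }
        return chosen;
    }

    // Approximate the equalization objective: each client takes its minimum
    // delay over the chosen hubs; measure max - min across clients.
    static double spread(double[][] delays, Set<Integer> chosen) {
        double lo = Double.MAX_VALUE, hi = -Double.MAX_VALUE;
        for (double[] row : delays) {
            double best = Double.MAX_VALUE;
            for (int h : chosen) best = Math.min(best, row[h]);
            lo = Math.min(lo, best);
            hi = Math.max(hi, best);
        }
        return hi - lo;
    }

    public static void main(String[] args) {
        double[][] d = {{30, 80, 55}, {90, 85, 60}, {40, 82, 58}};
        System.out.println(greedyHubs(d, 1));   // the hub with the most even delays
    }
}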
2. Adaptive Opportunistic Routing for Wireless Ad Hoc Networks
Abstract
A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc
networks is proposed. The proposed scheme utilizes a reinforcement learning framework
to opportunistically route the packets even in the absence of reliable knowledge about
channel statistics and network model. This scheme is shown to be optimal with respect
to an expected average per-packet reward criterion. The proposed routing scheme
jointly addresses the issues of learning and routing in an opportunistic context, where
the network structure is characterized by the transmission success probabilities. In
particular, this learning framework leads to a stochastic routing scheme that optimally
“explores” and “exploits” the opportunities in the network.
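The learning ingredient can be conveyed with a tiny epsilon-greedy relay chooser that keeps a running estimate of each neighbor's delivery success probability, exploring occasionally and otherwise exploiting the best estimate. This Java sketch is a simplification of the paper's reinforcement-learning framework, with invented parameters.

import java.util.Random;

public class OpportunisticRelaySketch {
    final double[] estimate;   // estimated delivery success per neighbor
    final int[] attempts;
    final Random rng = new Random(1);

    OpportunisticRelaySketch(int neighbors) {
        estimate = new double[neighbors];
        attempts = new int[neighbors];
    }

    // Pick a relay: explore a random neighbor with probability epsilon,
    // otherwise pick the neighbor with the best success estimate so far.
    int chooseRelay(double epsilon) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(estimate.length);
        int best = 0;
        for (int i = 1; i < estimate.length; i++)
            if (estimate[i] > estimate[best]) best = i;
        return best;
    }

    // Update the running average after observing an ACK (success) or timeout.
    void observe(int neighbor, boolean delivered) {
        attempts[neighbor]++;
        double x = delivered ? 1.0 : 0.0;
        estimate[neighbor] += (x - estimate[neighbor]) / attempts[neighbor];
    }

    public static void main(String[] args) {
        OpportunisticRelaySketch r = new OpportunisticRelaySketch(3);
        r.observe(2, true);
        System.out.println(r.chooseRelay(0.1));   // exploits neighbor 2 here
    }
}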
3. Revisiting Dynamic Query Protocols in Unstructured Peer-to-Peer Networks
In unstructured peer-to-peer networks, the average response latency and traffic cost of
a query are two main performance metrics. Controlled-flooding resource query
algorithms are widely used in unstructured networks such as peer-to-peer networks. In
this paper, we propose a novel algorithm named Selective Dynamic Query (SDQ).
Based on mathematical programming, SDQ calculates the optimal combination of an
integer TTL value and a set of neighbors to control the scope of the next query. Our
results demonstrate that SDQ provides finer grained control than other algorithms: its
response latency is close to the well-known minimum achieved via Expanding Ring; at
the same time, its traffic cost is also close to the minimum. To the best of our knowledge,
this is the first work capable of achieving the best trade-off between response latency and traffic
cost.
4. Opportunistic Flow-Level Latency Estimation Using Consistent NetFlow Networking 2012
Java
The inherent measurement support in routers (SNMP counters or NetFlow) is not sufficient
to diagnose performance problems in IP networks, especially for flow-specific problems
where the aggregate behavior within a router appears normal. Tomographic
approaches to detecting the location of such problems are not feasible in such cases, as
active probes can only capture aggregate characteristics. To address this problem, in this
paper, we propose a Consistent NetFlow (CNF) architecture for obtaining per-flow
delay measurements within routers. CNF utilizes the existing NetFlow architecture, which
already reports the first and last timestamps per flow, and adds hash-based
sampling to ensure that two adjacent routers record the same flows. We devise a novel
Multiflow estimator that approximates the intermediate delay samples from other
background flows to significantly improve the per-flow latency estimates compared to
the naive estimator that only uses actual flow samples. In our experiments using real
backbone traces and realistic delay models, we show that the Multiflow estimator is
accurate with a median relative error of less than 20% for flows of size greater than 100
packets. We also show that Multiflow estimator performs two to three times better than
a prior approach based on trajectory sampling at an equivalent packet sampling rate.
5. Exploiting Excess Capacity to Improve Robustness of WDM Mesh Networks
Excess capacity (EC) is the unused capacity in a network. We propose EC
management techniques to improve network performance. Our techniques exploit the
EC in two ways. First, a connection preprovisioning algorithm is used to reduce the
connection setup time. Second, whenever possible, we use protection schemes that
have higher availability and shorter protection switching time. Specifically, depending
on the amount of EC available in the network, our proposed EC management
techniques dynamically migrate connections between high-availability, high-backup-
capacity protection schemes and low-availability, low-backup-capacity protection
schemes. Thus, multiple protection schemes can coexist in the network. The four EC
management techniques studied in this paper differ in two respects: when the
connections are migrated from one protection scheme to another, and which
connections are migrated. Specifically, Lazy techniques migrate connections only
when necessary, whereas Proactive techniques migrate connections to free up
capacity in advance. Partial Backup Reprovisioning (PBR) techniques try to migrate a
minimal set of connections, whereas Global Backup Reprovisioning (GBR) techniques
migrate all connections. We develop integer linear program (ILP) formulations and
heuristic algorithms for the EC management techniques. We then present numerical
examples to illustrate how the EC management techniques improve network
performance by exploiting the EC in wavelength-division multiplexing (WDM) mesh
networks.
6. Leveraging a Compound Graph-Based DHT for Multi-Attribute Range Queries with
Performance Analysis
Resource discovery is critical to the usability and accessibility of grid computing systems.
Distributed Hash Table (DHT) has been applied to grid systems as a distributed
mechanism for providing scalable range-query and multi-attribute resource discovery.
Multi-DHT-based approaches depend on multiple DHT networks with each network
responsible for a single attribute. Single-DHT-based approaches keep the resource
information of all attributes in a single node. Both classes of approaches lead to high
overhead. In this paper, we propose a Low-Overhead Range-query Multi-attribute
(LORM) DHT-based resource discovery approach. Unlike other DHT-based approaches,
LORM relies on a single compound graph-based DHT network and distributes resource
information among nodes in a balanced manner by taking advantage of the compound
graph structure. Moreover, it is highly capable of handling the large-scale and dynamic
characteristics of resources in grids. Experimental results demonstrate the efficiency of
LORM in comparison with other resource discovery approaches. LORM dramatically
reduces maintenance and resource discovery overhead. In addition, it yields significant
improvements in resource location efficiency. We also analyze the performance of the
LORM approach rigorously by comparing it with other multi-DHT-based and single-DHT-
based approaches with respect to their overhead and efficiency. The analytical results
are consistent with the experimental results, and prove the superiority of the LORM
approach in theory.
JAVA IMAGE PROCESSING
1. Abrupt Motion Tracking Via Intensively Adaptive Markov-Chain Monte Carlo
Sampling
The robust tracking of abrupt motion is a challenging task in computer vision due to its
large motion uncertainty. While various particle filters and conventional Markov-chain
Monte Carlo (MCMC) methods have been proposed for visual tracking, these methods
often suffer from the well-known local-trap problem or from a poor convergence rate. In
this paper, we propose a novel sampling-based tracking scheme for the abrupt motion
problem in the Bayesian filtering framework. To effectively handle the local-trap
problem, we first introduce the stochastic approximation Monte Carlo (SAMC) sampling
method into the Bayesian filter tracking framework, in which the filtering distribution is
adaptively estimated as the sampling proceeds, and thus, a good approximation to
the target distribution is achieved. In addition, we propose a new MCMC sampler with
intensive adaptation to further improve the sampling efficiency, which combines a
density-grid-based predictive model with the SAMC sampling, to give a proposal
adaptation scheme. The proposed method is effective and computationally efficient in
addressing the abrupt motion problem. We compare our approach with several
alternative tracking algorithms, and extensive experimental results are presented to
demonstrate the effectiveness and the efficiency of the proposed method in dealing
with various types of abrupt motions.
2. Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks
We present an automatic vehicle detection system for aerial surveillance in this paper.
In this system, we escape from the stereotype and existing frameworks of vehicle
detection in aerial surveillance, which are either region-based or sliding-window-based.
We design a pixelwise classification method for vehicle detection. The novelty lies in the
fact that, in spite of performing pixelwise classification, relations among neighboring
pixels in a region are preserved in the feature extraction process. We consider features
including vehicle colors and local features. For vehicle color extraction, we utilize a
color transform to separate vehicle colors and nonvehicle colors effectively. For edge
detection, we apply moment preserving to adjust the thresholds of the Canny edge
detector automatically, which increases the adaptability and the accuracy for
detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is
constructed for the classification purpose. We convert regional local features into
quantitative observations that can be referenced when applying pixelwise
classification via DBN. Experiments were conducted on a wide variety of aerial videos.
The results demonstrate flexibility and good generalization abilities of the proposed
method on a challenging data set with aerial surveillance images taken at different
heights and under different camera angles.
3. A Secret-Sharing-Based Method for Authentication of Grayscale Document Images
via the Use of the PNG Image With a Data Repair Capability
A new blind authentication method based on the secret sharing technique with a data
repair capability for grayscale document images via the use of the Portable Network
Graphics (PNG) image is proposed. An authentication signal is generated for each
block of a grayscale document image, which, together with the binarized block
content, is transformed into several shares using the Shamir secret sharing scheme. The
involved parameters are carefully chosen so that as many shares as possible are
generated and embedded into an alpha channel plane. The alpha channel plane is
then combined with the original grayscale image to form a PNG image. During the
embedding process, the computed share values are mapped into a range of alpha
channel values near their maximum value of 255 to yield a transparent stego-image
with a disguise effect. In the process of image authentication, an image block is
marked as tampered if the authentication signal computed from the current block
content does not match that extracted from the shares embedded in the alpha
channel plane. Data repairing is then applied to each tampered block by a reverse
Shamir scheme after collecting two shares from unmarked blocks. Measures for
protecting the security of the data hidden in the alpha channel are also proposed.
Good experimental results prove the effectiveness of the proposed method for real
applications.
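The Shamir step can be illustrated with a (2, n) scheme over a small prime field: the secret is the constant term of a random degree-1 polynomial, each share is one evaluation of it, and any two shares recover the secret. The toy prime and values in this Java sketch are illustrative; the paper's parameter choices for alpha-channel embedding are more involved.

import java.util.Random;

public class ShamirSketch {
    static final long P = 257;   // toy prime field, large enough for a byte

    // Create n shares of secret s with threshold 2: f(x) = s + a*x (mod P),
    // share i is the point (i, f(i)).
    static long[][] share(long secret, int n, long seed) {
        long a = new Random(seed).nextInt((int) P);
        long[][] shares = new long[n][2];
        for (int i = 1; i <= n; i++) {
            shares[i - 1] = new long[]{i, (secret + a * i) % P};
        }
        return shares;
    }

    // Recover the secret from any two shares by interpolating f at x = 0.
    static long recover(long[] s1, long[] s2) {
        long x1 = s1[0], y1 = s1[1], x2 = s2[0], y2 = s2[1];
        long inv = modPow(Math.floorMod(x2 - x1, P), P - 2, P); // 1/(x2-x1) mod P
        long a = Math.floorMod((y2 - y1) * inv, P);             // slope
        return Math.floorMod(y1 - a * x1, P);                   // f(0) = secret
    }

    static long modPow(long b, long e, long m) {
        long r = 1; b %= m;
        for (; e > 0; e >>= 1, b = b * b % m) if ((e & 1) == 1) r = r * b % m;
        return r;
    }

    public static void main(String[] args) {
        long[][] sh = share(123, 4, 99L);
        System.out.println(recover(sh[0], sh[2]));   // prints 123
    }
}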
JAVA NETWORK SECURITY
1. Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs
Multihop routing in wireless sensor networks (WSNs) offers little protection against
identity deception through replaying routing information. An adversary can exploit this
defect to launch various harmful or even devastating attacks against the routing
protocols, including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation
is further aggravated by mobile and harsh network conditions. Traditional
cryptographic techniques or efforts at developing trust-aware routing protocols do not
effectively address this severe problem. To secure the WSNs against adversaries
misdirecting the multihop routing, we have designed and implemented TARF, a robust
trust-aware routing framework for dynamic WSNs. Without tight time synchronization or
known geographic information, TARF provides trustworthy and energy-efficient routes.
Most importantly, TARF proves effective against those harmful attacks developed out of
identity deception; the resilience of TARF is verified through extensive evaluation with
both simulation and empirical experiments on large-scale WSNs under various scenarios
including mobile and RF-shielding network conditions. Further, we have implemented a
low-overhead TARF module in TinyOS; as demonstrated, this implementation can be
incorporated into existing routing protocols with the least effort. Based on TARF, we also
demonstrated a proof-of-concept mobile target detection application that functions
well against an antidetection mechanism.
2. Detecting Spam Zombies by Monitoring Outgoing Messages
Compromised machines are one of the key security threats on the Internet; they are
often used to launch various security attacks such as spamming and spreading
malware, DDoS, and identity theft. Given that spamming provides a key economic
incentive for attackers to recruit the large number of compromised machines, we focus
on the detection of the compromised machines in a network that are involved in the
spamming activities, commonly known as spam zombies. We develop an effective
spam zombie detection system named SPOT by monitoring outgoing messages of a
network. SPOT is designed based on a powerful statistical tool called Sequential
Probability Ratio Test, which has bounded false positive and false negative error rates. In
addition, we also evaluate the performance of the developed SPOT system using a
two-month e-mail trace collected in a large US campus network. Our evaluation studies
show that SPOT is an effective and efficient system in automatically detecting
compromised machines in a network. For example, among the 440 internal IP addresses
observed in the e-mail trace, SPOT identifies 132 of them as being associated with
compromised machines. Out of the 132 IP addresses identified by SPOT, 126 can be
either independently confirmed (110) or highly likely (16) to be compromised. Moreover,
only seven internal IP addresses associated with compromised machines in the trace
are missed by SPOT. In addition, we also compare the performance of SPOT with two
other spam zombie detection algorithms based on the number and percentage of
spam messages originated or forwarded by internal machines, respectively, and show
that SPOT outperforms these two detection algorithms.
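The core of SPOT, the Sequential Probability Ratio Test, fits in a few lines: each outgoing message classified as spam or ham shifts a cumulative log-likelihood ratio between a "compromised" and a "normal" hypothesis, and a decision is reached once the ratio crosses thresholds set by the target error rates. The probabilities below are illustrative assumptions, not the paper's tuning.

public class SprtSketch {
    // H1: machine compromised (spam prob P1); H0: normal (spam prob P0).
    static final double P0 = 0.2, P1 = 0.9;
    static final double ALPHA = 0.01, BETA = 0.01;              // target error rates
    static final double UPPER = Math.log((1 - BETA) / ALPHA);   // accept H1
    static final double LOWER = Math.log(BETA / (1 - ALPHA));   // accept H0

    double llr = 0;   // cumulative log-likelihood ratio

    // Returns "COMPROMISED", "NORMAL", or "PENDING" after one observation.
    String observe(boolean isSpam) {
        llr += isSpam ? Math.log(P1 / P0) : Math.log((1 - P1) / (1 - P0));
        if (llr >= UPPER) return "COMPROMISED";
        if (llr <= LOWER) return "NORMAL";
        return "PENDING";
    }

    public static void main(String[] args) {
        SprtSketch spot = new SprtSketch();
        boolean[] stream = {true, true, false, true, true};   // spam/ham verdicts
        for (boolean msg : stream) System.out.println(spot.observe(msg));
    }
}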
3. A Hybrid Approach to Private Record Matching Network Security 2012 Java
Real-world entities are not always represented by the same set of features in different
data sets. Therefore, matching records of the same real-world entity distributed across
these data sets is a challenging task. If the data sets contain private information, the
problem becomes even more difficult. Existing solutions to this problem generally follow
two approaches: sanitization techniques and cryptographic techniques. We propose a
hybrid technique that combines these two approaches and enables users to trade off
between privacy, accuracy, and cost. Our main contribution is the use of a blocking
phase that operates over sanitized data to filter out, in a privacy-preserving manner,
pairs of records that do not satisfy the matching condition. We also provide a formal
definition of privacy and prove that the participants of our protocols learn nothing other
than their share of the result and what can be inferred from their share of the result, their
input and sanitized views of the input data sets (which are considered public
information). Our method incurs considerably lower costs than cryptographic
techniques and yields significantly more accurate matching results compared to
sanitization techniques, even when privacy requirements are high.
4. ES-MPICH2: A Message Passing Interface with Enhanced Security Network Security
2012 Java
An increasing number of commodity clusters are connected to each other by public
networks, which have become a potential threat to security sensitive parallel
applications running on the clusters. To address this security issue, we developed a
Message Passing Interface (MPI) implementation to preserve confidentiality of
messages communicated among nodes of clusters in an unsecured network. We focus
on MPI rather than other protocols, because MPI is one of the most popular
communication protocols for parallel computing on clusters. Our MPI implementation—
called ES-MPICH2—was built based on MPICH2 developed by the Argonne National
Laboratory. Like MPICH2, ES-MPICH2 aims at supporting a large variety of computation
and communication platforms like commodity clusters and high-speed networks. We
integrated encryption and decryption algorithms into the MPICH2 library with the
standard MPI interface; thus, data confidentiality of MPI applications can be
readily preserved without any need to change the source code of the MPI applications.
MPI-application programmers can fully configure any confidentiality services in
ES-MPICH2, because a secured configuration file in ES-MPICH2 offers the programmers
flexibility in choosing any cryptographic schemes and keys seamlessly incorporated in
ES-MPICH2. We used the Sandia Micro Benchmark and Intel MPI Benchmark suites to
evaluate and compare the performance of ES-MPICH2 with the original MPICH2
version. Our experiments show that overhead incurred by the confidentiality services in
ES-MPICH2 is marginal for small messages. The security overhead in ES-MPICH2 becomes
more pronounced with larger messages. Our results also show that security overhead
can be significantly reduced in ES-MPICH2 by high-performance clusters.
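The confidentiality idea, encrypting a message payload before it crosses an untrusted network, can be sketched with the standard Java crypto API. This is not ES-MPICH2 code: AES/ECB is used only to keep the sketch short, where a real deployment would use an authenticated mode and proper key management.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class MessageConfidentialitySketch {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();   // stands in for a cluster-wide shared key

        byte[] payload = "rank 0 -> rank 1: partial sums".getBytes(StandardCharsets.UTF_8);

        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] wire = enc.doFinal(payload);           // what the network sees

        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] received = dec.doFinal(wire);          // receiver side

        System.out.println(Arrays.equals(payload, received));   // true
    }
}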
5. Detecting Anomalous Insiders in Collaborative Information Systems
Collaborative information systems (CISs) are deployed within a diverse array of
environments that manage sensitive information. Current security mechanisms detect
insider threats, but they are ill-suited to monitor systems in which users function in
dynamic teams. In this paper, we introduce the community anomaly detection system
(CADS), an unsupervised learning framework to detect insider threats based on the
access logs of collaborative environments. The framework is based on the observation
that typical CIS users tend to form community structures based on the subjects
accessed (e.g., patients' records viewed by healthcare providers). CADS consists of
two components: 1) relational pattern extraction, which derives community structures;
and 2) anomaly prediction, which leverages a statistical model to determine when
users have sufficiently deviated from communities. We further extend CADS into
MetaCADS to account for the semantics of subjects (e.g., patients’ diagnoses). To
empirically evaluate the framework, we perform an assessment with three months of
access logs from a real electronic health record (EHR) system in a large medical center.
The results illustrate that our models exhibit significant performance gains over state-of-the-
art competitors. When the number of illicit users is low, MetaCADS is the best model, but
as the number grows, commonly accessed semantics lead to hiding in a crowd, such
that CADS is more prudent.
JAVA MOBILE COMPUTING
1. Handling Selfishness in Replica Allocation over a Mobile Ad Hoc Network
In a mobile ad hoc network, the mobility and resource constraints of mobile nodes may
lead to network partitioning or performance degradation. Several data replication
techniques have been proposed to minimize performance degradation. Most of them
assume that all mobile nodes collaborate fully in terms of sharing their memory space.
In reality, however, some nodes may selfishly decide only to cooperate partially, or not
at all, with other nodes. These selfish nodes could then reduce the overall data
accessibility in the network. In this paper, we examine the impact of selfish nodes in a
mobile ad hoc network from the perspective of replica allocation. We term this selfish
replica allocation. In particular, we develop a selfish node detection algorithm that
considers partial selfishness and novel replica allocation techniques to properly cope
with selfish replica allocation. The conducted simulations demonstrate that the proposed
approach outperforms traditional cooperative replica allocation techniques in terms of
data accessibility, communication cost, and average query delay.
2. Acknowledgment-Based Broadcast Protocol for Reliable and Efficient Data
Dissemination in Vehicular Ad Hoc Networks
We propose a broadcast algorithm suitable for a wide range of vehicular scenarios,
which only employs local information acquired via periodic beacon messages,
containing acknowledgments of the circulated broadcast messages. Each vehicle
decides whether it belongs to a connected dominating set (CDS). Vehicles in the CDS
use a shorter waiting period before possible retransmission. At time-out expiration, a
vehicle retransmits if it is aware of at least one neighbor in need of the message. To
address intermittent connectivity and appearance of new neighbors, the evaluation
timer can be restarted. Our algorithm resolves propagation at road intersections
without any need to even recognize intersections. It is inherently adaptable to different
mobility regimes, without the need to classify network or vehicle speeds. In a thorough
simulation-based performance evaluation, our algorithm is shown to provide higher
reliability and message efficiency than existing approaches for nonsafety applications.
3. Toward Reliable Data Delivery for Highly Dynamic Mobile Ad Hoc Networks
This paper addresses the problem of delivering data packets for highly dynamic mobile
ad hoc networks in a reliable and timely manner. Most existing ad hoc routing protocols
are susceptible to node mobility, especially for large-scale networks. Driven by this issue,
we propose an efficient Position-based Opportunistic Routing (POR) protocol which
takes advantage of the stateless property of geographic routing and the broadcast
nature of wireless medium. When a data packet is sent out, some of the neighbor
nodes that have overheard the transmission will serve as forwarding candidates, and
take turns forwarding the packet if it is not relayed by the specific best forwarder within a
certain period of time. By utilizing such in-the-air backup, communication is maintained
without being interrupted. The additional latency incurred by local route recovery is
greatly reduced and the duplicate relaying caused by packet reroute is also
decreased. In the case of a communication hole, a Virtual Destination-based Void
Handling (VDVH) scheme is further proposed to work together with POR. Both
theoretical analysis and simulation results show that POR achieves excellent
performance even under high node mobility with acceptable overhead and the new
void handling scheme also works well.
4. Protecting Location Privacy in Sensor Networks against a Global Eavesdropper
While many protocols for sensor network security provide confidentiality for the content
of messages, contextual information usually remains exposed. Such contextual
information can be exploited by an adversary to derive sensitive information such as
the locations of monitored objects and data sinks in the field. Attacks on these
components can significantly undermine any network application. Existing techniques
defend against the leakage of location information from a limited adversary who can only
observe network traffic in a small region. However, a stronger adversary, the global
eavesdropper, is realistic and can defeat these existing techniques. This paper first
formalizes the location privacy issues in sensor networks under this strong adversary
model and computes a lower bound on the communication overhead needed for
achieving a given level of location privacy. The paper then proposes two techniques to
provide location privacy to monitored objects (source-location privacy)—periodic
collection and source simulation—and two techniques to provide location privacy to
data sinks (sink-location privacy)—sink simulation and backbone flooding. These
techniques provide trade-offs between privacy, communication cost, and latency.
Through analysis and simulation, we demonstrate that the proposed techniques are
efficient and effective for source and sink-location privacy in sensor networks.
DOTNET NETWORK SECURITY
1. Revisiting Defenses against Large-Scale Online Password Guessing Attacks
Brute force and dictionary attacks on password-only remote login services are now
widespread and ever-increasing. Enabling convenient login for legitimate users while
preventing such attacks is a difficult problem. Automated Turing Tests (ATTs) continue to
be an effective, easy-to-deploy approach to identify automated malicious login
attempts with reasonable cost of inconvenience to users. In this paper, we discuss the
inadequacy of existing and proposed login protocols designed to address large scale
online dictionary attacks (e.g., from a botnet of hundreds of thousands of nodes). We
propose a new Password Guessing Resistant Protocol (PGRP), derived upon revisiting
prior proposals designed to restrict such attacks. While PGRP limits the total number of
login attempts from unknown remote hosts to as low as a single attempt per username,
legitimate users in most cases (e.g., when attempts are made from known, frequently-
used machines) can make several failed login attempts before being challenged with
an ATT. We analyze the performance of PGRP with two real-world data sets and find it
more promising than existing proposals.
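The flavor of such a protocol can be conveyed by a small decision function: attempts from known machines get a modest grace window of failures before an ATT is required, while unknown hosts face an ATT after a single free attempt. This Java sketch is a loose paraphrase with invented thresholds, not the PGRP specification.

import java.util.HashMap;
import java.util.Map;

public class LoginChallengeSketch {
    static final int KNOWN_HOST_FREE_FAILURES = 3;  // grace window, assumed
    final Map<String, Integer> failedCount = new HashMap<>();

    // Decide whether this attempt must first solve an ATT (e.g., a CAPTCHA).
    boolean requiresAtt(String hostId, boolean knownHost) {
        int failures = failedCount.getOrDefault(hostId, 0);
        if (knownHost) return failures >= KNOWN_HOST_FREE_FAILURES;
        return failures >= 1;       // unknown hosts: at most one free attempt
    }

    void recordFailure(String hostId) {
        failedCount.merge(hostId, 1, Integer::sum);
    }

    public static void main(String[] args) {
        LoginChallengeSketch guard = new LoginChallengeSketch();
        System.out.println(guard.requiresAtt("1.2.3.4", false)); // false: first try
        guard.recordFailure("1.2.3.4");
        System.out.println(guard.requiresAtt("1.2.3.4", false)); // true afterwards
    }
}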
2. SecuredTrust: A Dynamic Trust Computation Model for Secured Communication in
Multiagent Systems
Security and privacy issues have become critically important with the fast expansion of
multiagent systems. Most network applications such as pervasive computing, grid
computing, and P2P networks can be viewed as multiagent systems which are open,
anonymous, and dynamic in nature. Such characteristics of multiagent systems
introduce vulnerabilities and threats to providing secured communication. One feasible
way to minimize the threats is to evaluate the trust and reputation of the interacting
agents. Many trust/reputation models have done so, but they fail to properly evaluate
trust when malicious agents start to behave in an unpredictable way. Moreover, these
models are ineffective in providing quick response to a malicious agent’s oscillating
behavior. Another aspect of multiagent systems which is becoming critical for sustaining
good service quality is the even distribution of workload among service providing
agents. Most trust/reputation models have not yet addressed this issue. So, to cope with
the strategically altering behavior of malicious agents and to distribute workload as
evenly as possible among service providers, we present in this paper a dynamic trust
computation model called “SecuredTrust.” In this paper, we first analyze the different
factors related to evaluating the trust of an agent and then propose a comprehensive
quantitative model for measuring such trust. We also propose a novel load-balancing
algorithm based on the different factors defined in our model. Simulation results
indicate that, compared to other existing models, our model can effectively cope with
strategic behavioral change of malicious agents and at the same time efficiently
distribute workload among the service-providing agents under stable conditions.
3. Enhanced Privacy ID: A Direct Anonymous Attestation Scheme with Enhanced
Revocation Capabilities 2012 Dotnet Network Security
Direct Anonymous Attestation (DAA) is a scheme that enables the remote
authentication of a Trusted Platform Module (TPM) while preserving the user’s privacy. A
TPM can prove to a remote party that it is a valid TPM without revealing its identity and
without linkability. In the DAA scheme, a TPM can be revoked only if the DAA private
key in the hardware has been extracted and published widely so that verifiers obtain
the corrupted private key. If the unlinkability requirement is relaxed, a TPM suspected of
being compromised can be revoked even if the private key is not known. However,
with the full unlinkability requirement intact, if a TPM has been compromised but its
private key has not been distributed to verifiers, the TPM cannot be revoked.
Furthermore, a TPM cannot be revoked by the issuer if the TPM is found to be
compromised after the DAA issuing has occurred. In this paper, we present a new DAA
scheme called Enhanced Privacy ID (EPID) scheme that addresses the above
limitations. While still providing unlinkability, our scheme provides a method to revoke a
TPM even if the TPM private key is unknown. This expanded revocation property makes
the scheme useful for other applications such as driver's licenses. Our EPID scheme is
efficient and provably secure in the same security model as DAA, i.e., in the random
oracle model under the strong RSA assumption and the decisional Diffie-Hellman
assumption.
4. Extending Attack Graph-Based Security Metrics and Aggregating Their Application
2012 Dotnet Network Security
The attack graph is an abstraction that reveals the ways an attacker can leverage
vulnerabilities in a network to violate a security policy. When used with attack graph-
based security metrics, the attack graph may be used to quantitatively assess
security-relevant aspects of a network. The Shortest Path metric, the Number of Paths
metric, and the Mean of Path Lengths metric are three attack graph-based security
metrics that can extract security-relevant information. However, one’s usage of these
metrics can lead to misleading results. The Shortest Path metric and the Mean of Path
Lengths metric fail to adequately account for the number of ways an attacker may
violate a security policy. The Number of Paths metric fails to adequately account for the
attack effort associated with the attack paths. To overcome these shortcomings, we
propose a complementary suite of attack graph-based security metrics and specify an
algorithm for combining their usage. We present simulated results suggesting that, in
many instances, our approach correctly determines which of two attack graphs
corresponds to the more secure network.
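The Shortest Path metric mentioned above reduces to a graph search over attack states; as an illustration, the Java sketch below runs breadth-first search from the attacker's entry state to the policy-violating state on a hand-made attack graph in which every edge is one exploit step.

import java.util.*;

public class AttackGraphMetricSketch {
    // BFS over an unweighted attack graph: returns the minimum number of
    // exploit steps from start to goal, or -1 if the goal is unreachable.
    static int shortestAttackPath(Map<String, List<String>> g, String start, String goal) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            if (u.equals(goal)) return dist.get(u);
            for (String v : g.getOrDefault(u, List.of())) {
                if (dist.putIfAbsent(v, dist.get(u) + 1) == null) queue.add(v);
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
            "internet", List.of("webserver"),
            "webserver", List.of("db", "fileserver"),
            "fileserver", List.of("db"));
        System.out.println(shortestAttackPath(g, "internet", "db")); // 2
    }
}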
DOTNET NETWORKING
1. Throughput and Energy Efficiency in Wireless Ad Hoc Networks With Gaussian
Channels
This paper studies the bottleneck link capacity under the Gaussian channel model in
strongly connected random wireless ad hoc networks, with n nodes independently and
uniformly distributed in a unit square. We assume that each node is equipped with two
transceivers (one for transmission and one for reception) and allow all nodes to transmit
simultaneously. We derive lower and upper bounds, in terms of bottleneck link capacity,
for homogeneous networks (all nodes have the same transmission power level) and
propose an energy-efficient power assignment algorithm (CBPA) for heterogeneous
networks (nodes may have different power levels), with a provable bottleneck link
capacity guarantee of Ω(B·log(1 + 1/√(n·log²n))), where B is the channel bandwidth. In
addition, we develop a distributed implementation of CBPA with O(n²) message
complexity and provide extensive simulation results.
2. Cross-Layer Analysis of the End-to-End Delay Distribution in Wireless Sensor Networks
Emerging applications of wireless sensor networks (WSNs) require real-time quality-of-
service (QoS) guarantees to be provided by the network. Due to the nondeterministic
impacts of the wireless channel and queuing mechanisms, probabilistic analysis of QoS
is essential. One important metric of QoS in WSNs is the probability distribution of the
end-to-end delay. Compared to other widely used delay performance metrics such as
the mean delay, delay variance, and worst-case delay, the delay distribution can be
used to obtain the probability to meet a specific deadline for QoS-based
communication in WSNs. To investigate the end-to-end delay distribution, in this paper,
a comprehensive cross-layer analysis framework, which employs a stochastic queueing
model in realistic channel environments, is developed. This framework is generic and
can be parameterized for a wide variety of MAC protocols and routing protocols. Case
studies with the CSMA/CA MAC protocol and an anycast protocol are conducted to
illustrate how the developed framework can analytically predict the distribution of the
end-to-end delay. Extensive test-bed experiments and simulations are performed to
validate the accuracy of the framework for both deterministic and random
deployments. Moreover, the effects of various network parameters on the distribution of
end-to-end delay are investigated through the developed framework. To the best of
our knowledge, this is the first work that provides a generic, probabilistic cross-layer
analysis of end-to-end delay in WSNs.
3. Routing for Power Minimization in the Speed Scaling Model
We study network optimization that considers power minimization as an objective.
Studies have shown that mechanisms such as speed scaling can significantly reduce
the power consumption of telecommunication networks by matching the consumption
of each network element to the amount of processing required for its carried traffic.
Most existing research on speed scaling focuses on a single network element in
isolation. We aim for a network-wide optimization. Specifically, we study a routing
problem with the objective of provisioning guaranteed speed/bandwidth for a given
demand matrix while minimizing power consumption. Optimizing the routes critically
relies on the characteristic of the speed-power curve f(s), which is how power is
consumed as a function of the processing speed s. If f is superadditive, we show that
there is no bounded approximation in general for integral routing, i.e., each traffic
demand follows a single path. This contrasts with the well-known logarithmic
approximation for subadditive functions. However, for common speed-power curves
such as polynomials f(s) = μs^α, we are able to show a constant approximation via a
simple scheme of randomized rounding. We also generalize this rounding approach to
handle the case in which a nonzero startup cost σ appears in the speed-power curve,
i.e., f(s) = σ + μs^α if s > 0, and f(s) = 0 if s = 0. We present an O((σ/μ)^(1/α))-approximation, and we
discuss why coming up with an approximation ratio independent of the startup cost
may be hard. Finally, we provide simulation results to validate our algorithmic
approaches.
JAVA CLOUD COMPUTING
1. A Gossip Protocol for Dynamic Resource Management in Large Cloud Environments
We address the problem of dynamic resource management for a large-scale cloud
environment. Our contribution includes outlining a distributed middleware architecture
and presenting one of its key elements: a gossip protocol that (1) ensures fair resource
allocation among sites/applications, (2) dynamically adapts the allocation to load
changes and (3) scales both in the number of physical machines and
sites/applications. We formalize the resource allocation problem as that of dynamically
maximizing the cloud utility under CPU and memory constraints. We first present a
protocol that computes an optimal solution without considering memory constraints
and prove correctness and convergence properties. Then, we extend that protocol to
provide an efficient heuristic solution for the complete problem, which includes
minimizing the cost for adapting an allocation. The protocol continuously executes on
dynamic, local input and does not require global synchronization, as other proposed
gossip protocols do. We evaluate the heuristic protocol through simulation and find its
performance to be well aligned with our design goals.
2. Optimization of Resource Provisioning Cost in Cloud Computing
In cloud computing, cloud providers can offer cloud consumers two provisioning plans
for computing resources, namely reservation and on-demand plans. In general, the
cost of utilizing computing resources provisioned by the reservation plan is cheaper
than by the on-demand plan, since the cloud consumer has to pay the provider in
advance. With the reservation plan, the consumer can reduce the total resource
provisioning cost. However, the best advance reservation of resources is difficult to
achieve due to the uncertainty of the consumer's future demand and the providers'
resource prices. To address this problem, an optimal cloud resource provisioning (OCRP)
algorithm is proposed by formulating a stochastic programming model. The OCRP
algorithm can provision computing resources for use in multiple provisioning
stages as well as a long-term plan, e.g., four stages in a quarter plan and twelve stages
in a yearly plan. The demand and price uncertainty is considered in OCRP. In this
paper, different approaches to obtain the solution of the OCRP algorithm are
considered including deterministic equivalent formulation, sample-average
approximation, and Benders decomposition. Numerical studies are extensively
performed, and the results clearly show that with the OCRP algorithm, the cloud
consumer can successfully minimize the total cost of resource provisioning in cloud
computing environments.
3. A Secure Erasure Code-Based Cloud Storage System with Secure Data Forwarding
A cloud storage system, consisting of a collection of storage servers, provides long-term
storage services over the Internet. Storing data in a third party's cloud system causes
serious concern over data confidentiality. General encryption schemes protect data
confidentiality, but also limit the functionality of the storage system because only a few
operations are supported over encrypted data. Constructing a secure storage system
that supports multiple functions is challenging when the storage system is distributed
and has no central authority. We propose a threshold proxy re-encryption scheme and
integrate it with a decentralized erasure code such that a secure distributed storage
system is formulated. The distributed storage system not only supports secure and robust
data storage and retrieval, but also lets a user forward his data in the storage servers to
another user without retrieving the data back. The main technical contribution is that
the proxy re-encryption scheme supports encoding operations over encrypted
messages as well as forwarding operations over encoded and encrypted messages.
Our method fully integrates encrypting, encoding, and forwarding. We analyze and
suggest suitable parameters for the number of copies of a message dispatched to
storage servers and the number of storage servers queried by a key server. These
parameters allow more flexible adjustment between the number of storage servers and
robustness.
4. HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access
Control in Cloud Computing
Cloud computing has emerged as one of the most influential paradigms in the IT
industry in recent years. Since this
new computing technology requires users to entrust their valuable data to cloud
providers, there have been increasing security and privacy concerns on outsourced
data. Several schemes employing attribute-based encryption (ABE) have been
proposed for access control of outsourced data in cloud computing; however, most of
them suffer from inflexibility in implementing complex access control policies. In order to
realize scalable, flexible, and fine-grained access control of outsourced data in cloud
computing, in this paper, we propose hierarchical attribute-set-based encryption
(HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a
hierarchical structure of users. The proposed scheme not only achieves scalability due
to its hierarchical structure, but also inherits flexibility and fine-grained access control in
supporting compound attributes of ASBE. In addition, HASBE employs multiple value
assignments for access expiration time to deal with user revocation more efficiently
than existing schemes. We formally prove the security of HASBE based on security of the
ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al.
and analyze its performance and computational complexity. We implement our
scheme and show that it is both efficient and flexible in dealing with access control for
outsourced data in cloud computing with comprehensive experiments.
5. A Distributed Access Control Architecture for Cloud Computing
The large-scale, dynamic, and heterogeneous nature of cloud computing poses
numerous security challenges. But the cloud's main challenge is to provide a robust
authorization mechanism that incorporates multitenancy and virtualization aspects of
resources. The authors present a distributed architecture that incorporates principles
from security management and software engineering and propose key requirements
and a design model for the architecture.
6. Cloud Computing Security: From Single to Multi-clouds
The use of cloud computing has increased rapidly in many organizations. Cloud
computing provides many benefits in terms of low cost and accessibility of data.
Ensuring the security of cloud computing is a major factor in the cloud computing
environment, as users often store sensitive information with cloud storage providers but
these providers may be untrusted. Dealing with "single cloud" providers is predicted to
become less popular with customers due to risks of service availability failure and the
possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", or
in other words, "interclouds" or "cloud-of-clouds", has emerged recently. This paper
surveys recent research related to single and multi-cloud security and addresses
possible solutions. It is found that the research into the use of multi-cloud providers to
maintain security has received less attention from the research community than has the
use of single clouds. This work aims to promote the use of multi-clouds due to its ability
to reduce security risks that affect the cloud computing user.
DOTNET MOBILE COMPUTING
1. Local Greedy Approximation for Scheduling in Multihop Wireless Networks 2012 Dotnet
Mobile Computing
In recent years, there has been a significant amount of work done in developing low-
complexity scheduling schemes to
achieve high performance in multihop wireless networks. A centralized suboptimal
scheduling policy, called Greedy Maximal Scheduling (GMS), is a good candidate
because its empirically observed performance is close to optimal in a variety of network
settings. However, its distributed realization requires high complexity, which becomes a
major obstacle for practical implementation. In this paper, we develop simple
distributed greedy algorithms for scheduling in multihop wireless networks. We reduce
the complexity to near zero by relaxing the global ordering requirement of GMS.
Simulation results show that the new algorithms approximate the performance of GMS,
and outperform the state-of-the-art distributed scheduling policies.
2. Network Assisted Mobile Computing with Optimal Uplink Query Processing 2012
Dotnet Mobile Computing
Many mobile applications retrieve content from remote servers via user generated
queries. Processing these queries is often needed before the desired content can be
identified. Processing the request on the mobile devices can quickly sap the limited
battery resources. Conversely, processing user queries at remote servers can have slow
response times due to communication latency incurred during transmission of the
potentially large query. We evaluate a network-assisted mobile computing scenario
where mid-network nodes with "leasing" capabilities are deployed by a service
provider. Leasing computation power can reduce battery usage on the mobile devices
and improve response times. However, borrowing processing power from mid-network
nodes comes at a leasing cost which must be accounted for when making the decision
of where processing should occur. We study the tradeoff between battery usage,
processing and transmission latency, and mid-network leasing. We use the dynamic
programming framework to solve for the optimal processing policies that suggest the
amount of processing to be done at each mid-network node in order to minimize the
processing and communication latency and processing costs. Through numerical
studies, we examine the properties of the optimal processing policy and the core
tradeoffs in such systems.
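The dynamic-programming view can be made concrete with a toy recursion: at each mid-network node the policy chooses how many of the remaining processing units to lease there, trading leasing cost against the cost of transmitting a still-unprocessed query one more hop. The cost model in this Java sketch is entirely invented for illustration.

public class UplinkProcessingSketch {
    // next[remaining]: cheapest way to finish `remaining` units of processing
    // from the following hop onward. At each node we may lease 0..remaining
    // units; each unprocessed unit costs transmitPerUnit to carry one more
    // hop, and anything still left at the server is processed there for free.
    static double minCost(int nodes, int units, double leasePerUnit, double transmitPerUnit) {
        double[] next = new double[units + 1];          // cost at the server: 0
        for (int node = nodes - 1; node >= 0; node--) {
            double[] cur = new double[units + 1];
            for (int rem = 0; rem <= units; rem++) {
                cur[rem] = Double.MAX_VALUE;
                for (int lease = 0; lease <= rem; lease++) {
                    double c = lease * leasePerUnit            // leased here
                             + (rem - lease) * transmitPerUnit // carried onward
                             + next[rem - lease];
                    cur[rem] = Math.min(cur[rem], c);
                }
            }
            next = cur;
        }
        return next[units];
    }

    public static void main(String[] args) {
        // 3 mid-network nodes, 4 units of query processing to place.
        System.out.println(minCost(3, 4, 2.0, 1.5));
    }
}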