     SYSTEM-GENERATED REQUESTS FOR REWRITING PROPOSALS
                          Pietro Speroni di Fenizio1, Cyril Velikanov2

1 Pietro Speroni di Fenizio (speroni@dei.uc.pt), CISUC, Department of Informatics Engineering, University of
Coimbra, Pólo II, 3030-290 Coimbra, Portugal

2 Cyril Velikanov (cvelikanov@gmail.com), “Memorial”, Malyi Karetnyi per. 12, Moscow 127051, Russia, and
PoliTech Institute, 67 Saint Bernard St., Brussels 1060, Belgium

       We present an online deliberation system that uses mutual evaluation to col-
       laboratively develop solutions. Participants submit their proposals and evaluate
       each other’s proposals; some of them may then be invited by the system to rewrite
       “problematic” proposals. Two cases are discussed: a proposal supported by
       many, but not by a given person, who is then invited to rewrite it to make it
       acceptable to yet more participants; and a poorly presented but presumably
       interesting proposal. The first of these cases has been successfully implemented.
       Proposals are evaluated along two axes—understandability (or clarity, or, more
       generally, quality) and agreement. The latter is used by the system to cluster
       proposals according to their ideas, while the former is used both to present the
       best proposals at the top of their clusters and to find poorly written proposals
       that are candidates for rewriting. These functionalities may be considered
       important components of a large-scale online deliberation system.

1. Introduction
The study presented in this paper relates to open online deliberation and open online collabo-
ration. Within the context of eParticipation, online deliberation is an essential aspect;
online collaboration should also be considered essential if we want eParticipation to
be purposeful, that is, seeking a solution to a stated problem, and productive, that is, yield-
ing result(s) commonly agreed upon.

As eParticipation still remains a rather vague concept, we start by delimiting more precisely
the kind of activities considered in this paper. Namely, we will consider that a given ePartici-
pation activity (project, campaign) involves a number of active participants, i.e. participants
who may write their own contributions, but who at least actively read and appraise
(aka evaluate) each other’s contributions. A contribution is basically a proposal (on how to
solve the problem being discussed) or a comment on one or more other contributions (propos-
als or comments). Other types of contributions may be considered as well, but we are
mostly interested in proposals and, to a lesser extent, in comments.

Hereafter, active participants will simply be called participants, while those other people who,
even after registering, do not manifest their presence will not be considered at all.

1.1 Clustering of Contributions

One of our basic assumptions is that an open online deliberation on an important societal is-
sue, when it is expected to be purposeful and productive, may attract a really large number of
active participants (on the order of thousands, or tens of thousands, or more). In this case, it
will be termed a mass online deliberation (MOD). Within a MOD, a large number of contri-
butions (proposals and comments), all related directly or indirectly to the same problem stated
in advance, will be “put on the table” by participants, requiring their reading, appraisal and
further commenting. An average participant, who devotes just some part of his/her leisure
time to this activity, would not be able to read, understand and meaningfully classify such a
mass of competing proposals and the various comments on them. Hence the need for a
programmed MOD support system to facilitate this task for participants.

However, while considering the task of semantic sorting (clustering) of a large number of
proposals/comments on a given issue, we should state that this task cannot be performed
automatically by any existing algorithm of automated text analysis—simply because it must
analyse not only the content of every proposal (what it is about), but also its intent (what the
author suggests, prefers, wants, or rejects), and research into algorithms for the intent
analysis of texts is still in its infancy.

On the other hand, in an open mass deliberation, semantic clustering of proposals cannot be
commissioned to a staff of “facilitators” external to the deliberating community—because this
would often bias the course of deliberation; and yet more often, if not always, it would arouse
suspicion that such a bias is indeed present. We refer to [1] for a more detailed rationale.

Hence, the task of clustering proposals in a deliberation can only be performed collectively by
the whole deliberating community, and the total number of items to be considered implies that
the performance of this task should be distributed among a great number of participants, if not
all of them, acting in a concerted manner. Such a distributed collective action can only be per-
formed under the guidance of a MOD support system with a suitable clustering algorithm.

1.2 Ranking of Contributions Within a Cluster

The task of clustering proposals should result in grouping together proposals that are similar
or compatible in the view of those participants who have read (and appraised) them. Some of
those clusters may become rather large, thus necessitating a finer analysis. On the other hand,
a given proposal containing some idea may present that idea in a more or less clear, concise
and well-argued way. At the opposite end of this “quality scale” we may have many poorly
formulated proposals, down to barely understandable or completely obscure texts. Leaving the
latter case aside, such a quality ranking, if performed by participants and aggregated by the
support system, can further help participants navigate across a large number of proposals.

As for texts that appear obscure to most of their readers, such a text may nevertheless have
been sufficiently well understood by some of them, who may therefore have been able to
pass a further judgement on the ideas it contains—in the form of a simple (dis)agreement or of
a comment. It would be very desirable to spread this understanding to the rest of
the deliberating community, for a badly formulated idea may nevertheless be valuable and
original. This may require partial or full rewriting of a text that appears obscure or is misunder-
stood.

1.3 Collaboration by Rewriting Proposals

Now, let us turn to online collaboration—which may be considered either as an integral part
of a purposeful online deliberation, or as a separate task. In the former case, it is however
hardly imaginable that a very large number of participants may be able to productively col-
laborate, e.g. in preparing a common “final proposal” from a set of “initial” ones.

A more practical solution would consist in creating within a large deliberating community one
or more online workgroups of a limited size (say, typically up to 30 people) charged with edit-
ing a common proposal. In such a case, as well as in that of a standalone collaborative work-
group, there should be an established mechanism that supports the workgroup’s efforts to
reach an agreement on a commonly accepted final text, by rewriting the input texts, maybe
even several times.

Rewriting may be done collectively, or by individual workgroup members; in the latter case,
the support system may be helpful in finding out the best candidates for rewriting.

1.4 System-Requested User Actions

In each of those above-presented cases, there appears a need for the support system to assist
the deliberation and/or collaboration process by requesting specific actions to be performed at
specific times by designated participants. These actions may include, in particular: (1) addi-
tional appraisal (evaluation) of a given contribution (proposal or comment) that hasn’t been
sufficiently seen yet, to reliably assign it to some cluster and to rank it within that cluster; (2)
rewriting a poorly written proposal that may nevertheless present some interest for the com-
munity; (3) rewriting a controversial proposal in order to make it more acceptable to a larger
part of the community or of a workgroup.

The idea of system-requested user actions has several aspects that will be discussed in the
following sections. First, we need an algorithm for selecting the most necessary action(s)
to be system-requested at a given time. Then, we need a principle for
selecting the participant(s) best positioned to successfully perform the requested ac-
tion(s).

Finally, we should consider the motives that would prompt participants to perform the re-
quested action rather than to decline, and whether any incentives should be established to cre-
ate such motives.

We will discuss these aspects for each of the above contexts (1) – (3), starting from the third.

2. Rewriting Proposals by Those Who Disagree With Them
The idea of charging the system with the task of requesting from specific participants to re-
write specific proposals with which they disagree has been implemented by one of the authors
in a Web-based collaboration tool, “Vilfredo goes to Athens” (http://vilfredo.org). The website
is currently open for participation; its description can be found in [2]. In this system, partici-
pants can state a problem, then submit their own proposals (on how to solve the problem
stated) and evaluate each other’s proposals. The system uses a human-based genetic algo-
rithm (HBGA) [6], in which the whole process may run for several cycles, each consisting of a
proposal phase and evaluation phase. In every consecutive cycle, a new generation of pro-
posals is written, based on, or inspired by, a filtered selection of the proposals of the previous
generation. The selection is done by extracting a Pareto front [7] of the proposals.

The concept of a Pareto front can be explained as follows. We say that a proposal A domi-
nates a proposal B if everybody who votes for B also votes for A, while at least one
participant votes for A but not for B. We can then exclude proposal B
from the list of proposals without making anyone really unhappy because they are still satis-
fied with one of the available proposals, namely A. The Pareto front then will be the list of
proposals that are not dominated and hence pass the filter.
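To make the filter concrete, here is a minimal sketch in Python of Pareto-front extraction over approval votes. The data layout and the names (votes, dominates, pareto_front) are ours for illustration, not taken from the vilfredo.org code base.

```python
# Sketch of Pareto-front extraction; data shapes are assumptions.

def dominates(a_voters: set, b_voters: set) -> bool:
    """A dominates B: every voter of B also votes for A,
    and at least one voter of A does not vote for B."""
    return b_voters <= a_voters and b_voters != a_voters

def pareto_front(votes: dict[str, set]) -> list[str]:
    """Return the proposals not dominated by any other proposal."""
    return [p for p, vp in votes.items()
            if not any(dominates(vq, vp)
                       for q, vq in votes.items() if q != p)]

votes = {"A": {"ann", "bob", "eve", "john"},
         "B": {"ann", "bob"},       # dominated by A
         "C": {"eve", "kim"}}       # survives: kim votes only for C
print(pareto_front(votes))          # ['A', 'C']
```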

This particular way in which proposals reach the next generation may lead to some singular
effects. One such effect is that sometimes a certain person has a stronger-than-average
effect on the Pareto front. For example, let us suppose that there are two proposals A and B,
and everybody (except John) who voted for B also voted for A, while at the same time there
are other people who voted for A but not for B. So A “nearly dominates” B. Everybody except
John either prefers A to B, or finds them equally acceptable.

In such a case, easily discoverable by the support system, John can be invited to rewrite A, in
a way he thinks would be acceptable for him while (presumably) still acceptable for those
who currently accept the original proposal A. If John’s version of A, say A’, appears to be
really acceptable to everybody who voted for A, and if John also accepts it (something we
can assume, since he wrote it), then A’ will dominate both A and B, and we will have shrunk the
Pareto front by at least this one proposal. In a typical generation we may have multiple
people who are invited to rewrite some proposals; moreover, the same proposal can be rewrit-
ten by multiple people (each one trying to make the proposal dominate a different one), and
the same person can be invited to rewrite multiple proposals. In general, the more a per-
son votes differently from the other people (and thus represents a unique point
of view), the more often the system will ask him or her to rewrite others’ proposals.
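This “near domination” pattern is easy to detect mechanically. The following sketch (our own illustration, with assumed names and data shapes) finds every pair (A, B) where exactly one dissenting voter prevents A from dominating B, together with that voter:

```python
# Detect single-voter blockers of domination; names are illustrative.
from itertools import permutations

def near_domination_blockers(votes: dict[str, set]):
    """Yield (blocker, a, b): `blocker` is the single participant whose
    vote for b but not for a prevents a from dominating b."""
    for a, b in permutations(votes, 2):
        only_b = votes[b] - votes[a]        # voters of b missing from a
        if len(only_b) == 1 and votes[a] - votes[b]:
            yield next(iter(only_b)), a, b

votes = {"A": {"ann", "bob", "eve", "kim"},
         "B": {"ann", "bob", "john"}}       # john alone blocks A over B
for blocker, a, b in near_domination_blockers(votes):
    print(f"invite {blocker} to rewrite {a}")   # invite john to rewrite A
```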

Field experience has shown that participants tend to welcome the invitation to work on a par-
ticular proposal. In http://vilfredo.org, at every generation each user has many possibilities.
He/she can rewrite another person’s proposal, or try to work out a compromise between two
different positions, or recover a proposal that was lost, trying to reframe it in a different way.
Among all those possibilities it is common for users to feel lost. As time is always limited, a
participant will want to take the most effective action. An invitation presented by the support
system represents, in this sense, one of the most cost-effective possible actions, and is thus
generally well received.

3. Clustering, Ranking, and System-Requested Appraisal Actions
Our next step concerns a support system for mass online deliberation (MOD), the need for
which has been discussed above in the introduction of this paper. This system will
include an algorithm for clustering and ranking participants’ contributions—proposals, com-
ments, and more—based on individual appraisals (evaluations) of every contribution by a lim-
ited number of other participants.

To ensure a more objective appraisal of every contribution, especially immediately after it is
written and uploaded into the system, participants who come online will be invited to evaluate
proposals that the system presents to them at random. This should be done in such a way that
each proposal is evaluated on average the same number of times, and all are evaluated at least
a predefined minimum number of times. Also, at this stage it should not be possible for any
participant to see the evaluation results of others, to avoid any unconscious bias. If at the
end of this “blind evaluation” period some proposals have not yet been evaluated enough,
they can be sent for additional evaluation to randomly selected participants.
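One simple way (our assumption, not a prescription of the paper) to keep evaluation counts balanced is to always hand a newly arrived reviewer the unseen proposals that currently have the fewest evaluations, breaking ties at random:

```python
# Balanced random assignment of proposals to a blind reviewer.
import heapq
import random

def pick_for_review(counts: dict[str, int], seen: set[str], k: int = 3):
    """Choose up to k unseen proposals with the fewest evaluations so far."""
    candidates = [p for p in counts if p not in seen]
    random.shuffle(candidates)           # random tie-breaking
    chosen = heapq.nsmallest(k, candidates, key=counts.__getitem__)
    for p in chosen:
        counts[p] += 1                   # count the evaluation just requested
    return chosen

counts = {"P1": 0, "P2": 2, "P3": 1, "P4": 0}
print(pick_for_review(counts, seen={"P2"}, k=2))   # ['P1', 'P4'] (either order)
```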

This may be seen as a simplified blind peer review of contributions that makes use of volun-
tary participation as much as possible. Note that random selection of proposals for evaluation
by a given person is functionally equivalent to random selection of participants (among cur-
rently active ones) for evaluation of a given proposal. Each reviewer may thus be considered
as performing a system-requested action, namely appraising a new contribution that other-
wise he/she would probably never have had any impulse to read.

3.1 Managing Public Responsiveness to System Requests

Experience from existing projects that include user actions of appraisal (evaluation) of texts
and other items, and/or of their tagging, shows that people are typically active enough in per-
forming such actions on a discretionary (at-will) basis. It remains to be seen how responsive
the people involved in future large-scale eParticipation activities would be to the same type
of actions when these are not voluntary but system-requested. We can expect a sufficient
level of responsiveness, considering that the possibility to participate and collaborate is a
prize in itself.

If, however, the public is not responsive enough, specific incentives could optionally be
introduced and managed by the support system. For example, the system could maintain for
every participant two activity counters, one for his/her voluntary actions, especially for writing
his/her own contributions, another for system-requested actions performed. If the second counter
is too low, new contributions from that participant will be temporarily blocked, pending a suf-
ficient number of system-requested actions performed.
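A sketch of this two-counter rule; the threshold and the ratio test are our own illustrative choices, not fixed by the paper:

```python
# Illustrative two-counter gating of new contributions.
from dataclasses import dataclass

@dataclass
class Activity:
    voluntary: int = 0          # own contributions written
    requested: int = 0          # system-requested actions performed

def may_contribute(a: Activity, min_ratio: float = 0.5) -> bool:
    """Block new contributions until enough system-requested actions
    have been performed relative to voluntary ones."""
    if a.voluntary == 0:
        return True             # newcomers may always post
    return a.requested / a.voluntary >= min_ratio

alice = Activity(voluntary=4, requested=1)
print(may_contribute(alice))    # False: 1/4 < 0.5, posting is paused
```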

A system that randomly requires participants to do this or that at a moment that may be inop-
portune for a given person may be considered by participants as too “oppressive”. To avoid
such an undesirable effect, the system may send its requests at any given moment only to those
participants who have themselves manifested their readiness at that moment to perform
system-requested actions. In this way, the system randomly selects a contribution to be sent to
a given participant, rather than a random participant to read this contribution. Globally, the
result will be the same.

3.2 Two-Parameter Appraisal: How Well Presented, How Much Do I Agree

Our algorithm of clustering and ranking a large set of participants’ contributions is based on
their double appraisal, first by a few randomly selected participants (peer reviewers), then by
whoever wishes to read and appraise any given contribution. Presumably, the latter (dis-
cretionary appraisers) would be helped by the support system when looking for the con-
tributions that are potentially most interesting to them, and this system-made selec-
tion will be made by using said algorithm, as briefly explained below; see [3] for more de-
tails.

So, every participant is expected, when reading a contribution, to appraise it on two different
scales, one for the contribution’s quality, another for the reader’s agreement with the ideas it
contains. The quality of a given text is considered to be its intrinsic characteristic, thus call-
ing for an objective evaluation by the reader; while the agreement is a fully sub-
jective one, as it depends on the reader’s convictions, beliefs, etc. Hence the two measures can
be theoretically seen as independent (though, as we will see in the next section, they indeed
depend on each other).

The two parameters are then exploited by the system in quite different ways. The quality
grades assigned by individual appraisers are “aggregated” into the quality rank of every con-
tribution; while the individually assigned agreement levels are used by the system to find con-
tributions expressing presumably similar or compatible ideas or opinions. Mathematically,
some metric is defined within the space of all contributions by pairwise comparison of
agreement levels assigned by the same readers (if X agrees with both A and B, this decreases
the distance between A and B; if Y agrees with A but disagrees with B, this increases the dis-
tance). Based on this metric, the system can discover clusters of presumably similar or com-
patible contributions.
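One possible concrete form of such a metric (an assumption on our part; the paper does not fix the formula) compares the agreement grades, here simply +1 or -1, of the readers who appraised both contributions:

```python
# A possible pairwise distance from shared agreement grades.

def distance(grades_a: dict[str, int], grades_b: dict[str, int]) -> float:
    """Distance in [0, 1] between two contributions, based only on
    readers who appraised both (grades are +1 agree / -1 disagree)."""
    common = grades_a.keys() & grades_b.keys()
    if not common:
        return 1.0                    # no shared readers: maximally far
    matches = sum(grades_a[r] == grades_b[r] for r in common)
    return 1.0 - matches / len(common)

a = {"x": 1, "y": 1, "z": -1}
b = {"x": 1, "y": -1, "w": 1}
print(distance(a, b))                 # 0.5: x agrees on both, y splits
```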

Quality ranking is then performed within every cluster. Namely, good quality grades assigned
by appraisers to a contribution will promote it towards the top of a ranked list of all contribu-
tions among those contained in a given cluster. This is indeed much more meaningful than the
ordinary ranking of all contributions in a single list; namely, our method gives voice to mi-
norities, whose opinions are not supported by many.

When a participant wants to read the “most interesting” contributions (or “the most represen-
tative”, etc.), the system will suggest to him/her a few highly ranked contributions within every
cluster, including the smallest clusters. In this way, the participant can easily get acquainted
with the whole spectrum of the currently expressed ideas.
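As a sketch (with data shapes of our own choosing), this presentation step reduces to taking the top-quality items of every cluster, small clusters included:

```python
# Present top-ranked contributions per cluster; layout is assumed.

def spectrum(clusters: list[set], quality: dict[str, float], k: int = 2):
    """Return up to k highest-quality contributions from each cluster."""
    return [sorted(c, key=quality.get, reverse=True)[:k] for c in clusters]

clusters = [{"A", "B", "C"}, {"D"}]
quality = {"A": 0.9, "B": 0.4, "C": 0.7, "D": 0.5}
print(spectrum(clusters, quality))    # [['A', 'C'], ['D']]
```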

3.3 System Requests for Additional Appraisals

At some point our algorithm may discover that there is not enough data to decide to which
cluster a given proposal should go (or whether it should start a new cluster), or to better dis-
cern the difference between two proposals or two clusters. In such cases the system may re-
quest additional appraisals from participants.

At this stage, however, not all participants are equal in their capacity to make an appraisal that
will be decisive for the system. For example, if participant U has not yet read and appraised
either A or B, he should do so for both in order for the system to increase its knowledge
about the (dis)similarity between A and B. In contrast, if V has appraised A but not B, or vice
versa, then the system can ask her to appraise the other one, thus receiving valuable infor-
mation from just one requested action, rather than the two needed in the case of U.
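In code, this selection rule amounts to ranking participants by how many appraisals they still lack for the pair in question; a minimal sketch under our own data layout:

```python
# Rank participants by the cost of learning about the pair (a, b).

def cheapest_appraisers(appraised: dict[str, set], a: str, b: str):
    """Sort participants by how many appraisals the system must still
    request before their data says something about the pair (a, b);
    those who already appraised both contribute nothing new."""
    missing = {u: {a, b} - done for u, done in appraised.items()}
    return sorted((u for u, m in missing.items() if m),
                  key=lambda u: len(missing[u]))

appraised = {"U": set(), "V": {"A"}, "W": {"A", "B"}}
print(cheapest_appraisers(appraised, "A", "B"))    # ['V', 'U']
```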

4. Rewriting Poorly Written but Agreeable Proposals
Let us now discuss in more detail a very simple case of two-parameter appraisal or evalua-
tion, where each participant is invited to characterise a proposal with just one of three dis-
crete values: “agree”, “disagree”, and “don’t understand”. Despite the simplicity and straight-
forwardness of this method, it is still a “two-parameter” one, and it can provide the system
with fairly rich information on both the set of proposals on the table and the community of
participants.

4.1 Agreement Depends Upon Understanding

Although the quality with which a proposal is written, and how much a person agrees with it,
are seen as independent, they are not fully so. For only inasmuch as we understand a proposal
can we agree or disagree with it. We cannot agree or disagree on something we do not under-
stand. These two measures are a little like light and colour. If there is no light we cannot
perceive any colour, nor does it make any sense to speak about the spectrum of a light
source when no light source is there; and the brighter a source of light, the better we can
distinguish different colours. Here we consider only two “colours” (agree and disagree), and
the brightness represents how clearly we understand the proposal (see Figure 1 below; in the
printed version, however, colours are not represented).




Fig. 1. Space of possible evaluations for a single proposal. Participants will be invited to evaluate every pro-
posal with respect to how clearly they understand it and how much they agree with it. The more a partici-
pant understands a proposal, the more strongly he/she can agree or disagree with it. When a proposal is
completely incomprehensible, it cannot be rated on the agreement axis.
So the simplest form of evaluation that can still work in our system would have three discrete
options: “agree”, “disagree”, and “don’t understand”. More continuous evaluations are also pos-
sible, always keeping in mind that if a person does not understand a proposal at all, he/she
should not be allowed to evaluate how much he/she agrees with it. Also, in a continuous
evaluation (or a multi-grade one, say with integers from -5 to +5), the span of possible agreement
grades should depend on the level of understanding, so that for a medium-level understanding
the extreme agreement values, e.g. -5 or +5, are not allowed. This quite natural re-
striction is graphically represented by the inverted triangle in Figure 1.
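The restriction can be captured by a one-line validity check. In this sketch we assume understanding is graded 0..5 and agreement -5..+5, following the example in the text; the function name is ours:

```python
# Validity check for the "inverted triangle" of Figure 1.

def valid_evaluation(understanding: int, agreement: int | None) -> bool:
    """Accept an evaluation only if the agreement grade fits inside the
    span allowed by the understanding grade."""
    if not 0 <= understanding <= 5:
        return False
    if understanding == 0:              # "don't understand":
        return agreement is None        # no agreement rating allowed
    return agreement is not None and abs(agreement) <= understanding

print(valid_evaluation(0, None))    # True : incomprehensible, no grade
print(valid_evaluation(3, 5))       # False: +5 requires full understanding
print(valid_evaluation(5, -5))      # True : clear text, full disagreement
```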

4.2 Using the Above Two Measures

Once the measures are in place and users start to evaluate the existing proposals, the sys-
tem gathers a lot of useful information, much more than can be deduced from an ordi-
nary “linear” evaluation, even a multi-grade one.

The most obvious information that can be gathered is, first, to what extent a proposal is un-
derstandable and, second, to what extent people agree with it. But there are also less obvi-
ous data that can be gathered, for example: which users are able to write proposals that are
widely understandable; which users understand and agree with a specific proposal; which pro-
posals share the same user base (in other words, which proposals are agreed upon by the same
users). Each of these characteristics can then be used in a different way.

4.3 Using Agreement Data for Clustering

Here we present the simplest form of our agreement-based clustering algorithm, for the above
3-value appraisal scheme. Suppose we have n proposals and m users. Let Kn be the complete
weighted undirected graph with n nodes and n*(n-1)/2 edges. Let EAB be the edge between
nodes A and B; we can assign to EAB a weight that represents the overlap between the set of
people who voted for A and the set of those who voted for B.

Thus the weight can be defined as W=|A∩B|/|A∪B|, i.e. the cardinality of (A intersection B)
divided by the cardinality of (A union B); it is a value between 0 and 1, with W=0 if no one
voted for both A and B, and W=1 if everybody who voted for A voted for B and vice versa.

There are many straightforward ways to cluster the nodes of such a weighted graph; the sim-
plest one is to delete every edge whose weight is less than some threshold x (see [5]). Cluster-
ing the nodes of a graph is a well-researched area, so we can simply use one of the existing
methods to perform proposal clustering in a mass online deliberation (see e.g. [4]).
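A compact sketch of this simplest variant: build the Jaccard weights W = |A∩B| / |A∪B|, keep only edges with W ≥ x, and read off the connected components with union-find. The threshold value is arbitrary here, and the code is our illustration, not the paper's implementation.

```python
# Threshold the Jaccard-weighted graph, then take connected components.
from itertools import combinations

def cluster(votes: dict[str, set], x: float = 0.5) -> list[set]:
    """Cluster proposals whose voter bases overlap by at least x."""
    parent = {p: p for p in votes}
    def find(p):                          # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for a, b in combinations(votes, 2):
        union = votes[a] | votes[b]
        if union and len(votes[a] & votes[b]) / len(union) >= x:
            parent[find(a)] = find(b)     # edge survives: merge clusters
    clusters = {}
    for p in votes:
        clusters.setdefault(find(p), set()).add(p)
    return list(clusters.values())

votes = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {7, 8}}
print(cluster(votes))                     # [{'A', 'B'}, {'C'}]
```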

4.4 Using Quality Evaluation

While agreement evaluation can and should be used to cluster proposals, quality evaluation
(in our case clarity, that is, how understandable they are) can be used for a different pur-
pose. Namely, by counting the proportion of participants who did not understand a given
proposal, the system can discover which proposals are not clearly written, and then invite
somebody among the participants to rewrite those overly obscure proposals.

We do not think it practical to suggest that an author rewrite his/her own proposal, for we
assume that each author has already done his/her best. But we also do not want to simply ask a
random user to rewrite a proposal. Instead, the person asked to rewrite a proposal that needs to
be clarified should (a) understand the proposal, (b) be able to write (proposals) well, possi-
bly better than the original author, and (c) agree with the proposal. Although point (c) is
not strictly necessary, a person who does not agree with a proposal might not wish to spend
time and effort trying to rewrite it, and might be biased against it.

On the other hand, a proposal that needs to be rewritten should (a) be widely considered in-
comprehensible, (b) be such that those people who do understand it would generally support it
(otherwise there is no real need to clarify it), and (c) possibly be part of a relatively small
cluster, so that the same ideas are not already present in some similar proposal(s).
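The criteria (a)-(c) for choosing a rewriter translate naturally into a scoring function. In the sketch below, the weights and the use of a participant's average clarity grade as a proxy for writing skill are our own assumptions:

```python
# Illustrative scoring of rewriter criteria (a)-(c).

def rewriter_score(user, proposal, understood, agreed, clarity_by_author):
    """Higher is better: the user must understand the proposal,
    should write clearly, and preferably agrees with it."""
    if proposal not in understood[user]:
        return 0.0                             # criterion (a) is a must
    score = clarity_by_author.get(user, 0.0)   # (b) writing-skill proxy
    if proposal in agreed[user]:
        score += 1.0                           # (c) soft bonus
    return score

understood = {"ann": {"P"}, "bob": {"P"}, "kim": set()}
agreed     = {"ann": {"P"}, "bob": set(),  "kim": set()}
clarity_by_author = {"ann": 0.6, "bob": 0.9}   # mean clarity of own texts
best = max(understood, key=lambda u: rewriter_score(
    u, "P", understood, agreed, clarity_by_author))
print(best)                                    # 'ann': 1.6 beats bob's 0.9
```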

If all that is true, then the proposal possibly contains some interesting but not clearly expressed
ideas—ideas that would be lost if they are not explained better. Thanks to the above-described
appraisal method, our support system will have all that information. Hence, for each proposal
that needs to be rewritten, the system can find who should rewrite it and ask that
person to do so.

Once the new proposal (or a new version of the old proposal) has been written, the initial
author can be contacted and asked if he/she agrees that the new proposal still expresses his/
her ideas. If he/she agrees, then the new proposal can be advertised to all those people
who supported the original proposal, and also to those who had declared that they did not un-
derstand it. The new proposal can be presented as “written by 1st author and 2nd author on an
original idea of 1st author”. The process can indeed have several iterations and more co-
authors, though probably few ideas are so complex as to require more than one rewriting.

References
    [1] C. Velikanov. Procedures and Methods for Cross-community Online Deliberation, in: eJournal of eDe-
        mocracy & Open Government (JeDem), Vol. 2, No. 2, Danube University, Krems, 2010.
    [2] P. Speroni di Fenizio, D. Paterson. Don't Vote, Evolve!, in: Proceedings of the Second International
        Conference on eParticipation (ePart 2010), LNCS, Lausanne (Switzerland), 2010.
    [3] C. Velikanov. Mutual Moderation and Appraisal of Contributions in eParticipation, in: Proceedings of
        the eDem 2010 Conference, Austrian Computer Society, Vienna, 2010, p. 173-182.
    [4] S.E. Schaeffer. Graph Clustering, in: Computer Science Review, Volume 1, Issue 1, 2007, p. 27-64.
    [5] T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein. Introduction to Algorithms (3rd ed.), MIT Press,
        2009.
    [6] A. Kosorukoff. Human Based Genetic Algorithm, in: Proceedings of the 2001 IEEE International
        Conference on Systems, Man, and Cybernetics, 2001.
    [7] S.G. Ficici, J.B. Pollack. Pareto Optimality in Coevolutionary Learning, in: Advances in Artificial
        Life: Proceedings of the 6th European Conference (ECAL 2001), Prague, 2001.

Mais conteúdo relacionado

Mais procurados

Hypersoft Social Network Analysis
Hypersoft Social Network AnalysisHypersoft Social Network Analysis
Hypersoft Social Network AnalysisDavidHypersoft
 
Recommender systems in the scope of opinion formation: a model
Recommender systems in the scope of opinion formation: a modelRecommender systems in the scope of opinion formation: a model
Recommender systems in the scope of opinion formation: a modelMarcel Blattner, PhD
 
Rep on the Roll A peer to peer reputation system based on a rolling blockchain
Rep on the Roll A peer to peer reputation system based on a rolling blockchainRep on the Roll A peer to peer reputation system based on a rolling blockchain
Rep on the Roll A peer to peer reputation system based on a rolling blockchainRichard Dennis
 
EffectiveCrowdSourcingForProductFeatureIdeation v18
EffectiveCrowdSourcingForProductFeatureIdeation v18EffectiveCrowdSourcingForProductFeatureIdeation v18
EffectiveCrowdSourcingForProductFeatureIdeation v18Karthikeyan Rajasekharan
 
CS6010 Social Network Analysis Unit V
CS6010 Social Network Analysis Unit VCS6010 Social Network Analysis Unit V
CS6010 Social Network Analysis Unit Vpkaviya
 
The eigen rumor algorithm
The eigen rumor algorithmThe eigen rumor algorithm
The eigen rumor algorithmamooool2000
 
A Guide to Social Network Analysis
A Guide to Social Network AnalysisA Guide to Social Network Analysis
A Guide to Social Network AnalysisOlivier Serrat
 
Mining and analyzing social media part 2 - hicss47 tutorial - dave king
Mining and analyzing social media   part 2 - hicss47 tutorial - dave kingMining and analyzing social media   part 2 - hicss47 tutorial - dave king
Mining and analyzing social media part 2 - hicss47 tutorial - dave kingDave King
 
Big Data: Social Network Analysis
Big Data: Social Network AnalysisBig Data: Social Network Analysis
Big Data: Social Network AnalysisMichel Bruley
 
A Novel Frame Work System Used In Mobile with Cloud Based Environment
A Novel Frame Work System Used In Mobile with Cloud Based EnvironmentA Novel Frame Work System Used In Mobile with Cloud Based Environment
A Novel Frame Work System Used In Mobile with Cloud Based Environmentpaperpublications3
 
SSSW 2016 Cognition Tutorial
SSSW 2016 Cognition TutorialSSSW 2016 Cognition Tutorial
SSSW 2016 Cognition TutorialIrene Celino
 
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...Jonathan Stray
 
Social network analysis course 2010 - 2011
Social network analysis course 2010 - 2011Social network analysis course 2010 - 2011
Social network analysis course 2010 - 2011guillaume ereteo
 
Overview Of Network Analysis Platforms
Overview Of Network Analysis PlatformsOverview Of Network Analysis Platforms
Overview Of Network Analysis PlatformsNoah Flower
 
How to conduct a social network analysis: A tool for empowering teams and wor...
How to conduct a social network analysis: A tool for empowering teams and wor...How to conduct a social network analysis: A tool for empowering teams and wor...
How to conduct a social network analysis: A tool for empowering teams and wor...Jeromy Anglim
 
Social Network Analysis (SNA) 2018
Social Network Analysis  (SNA) 2018Social Network Analysis  (SNA) 2018
Social Network Analysis (SNA) 2018Arsalan Khan
 

Mais procurados (20)

Hypersoft Social Network Analysis
Hypersoft Social Network AnalysisHypersoft Social Network Analysis
Hypersoft Social Network Analysis
 
Recommender systems in the scope of opinion formation: a model
Recommender systems in the scope of opinion formation: a modelRecommender systems in the scope of opinion formation: a model
Recommender systems in the scope of opinion formation: a model
 
Rep on the Roll A peer to peer reputation system based on a rolling blockchain
Rep on the Roll A peer to peer reputation system based on a rolling blockchainRep on the Roll A peer to peer reputation system based on a rolling blockchain
Rep on the Roll A peer to peer reputation system based on a rolling blockchain
 
EffectiveCrowdSourcingForProductFeatureIdeation v18
EffectiveCrowdSourcingForProductFeatureIdeation v18EffectiveCrowdSourcingForProductFeatureIdeation v18
EffectiveCrowdSourcingForProductFeatureIdeation v18
 
CS6010 Social Network Analysis Unit V
CS6010 Social Network Analysis Unit VCS6010 Social Network Analysis Unit V
CS6010 Social Network Analysis Unit V
 
The eigen rumor algorithm
The eigen rumor algorithmThe eigen rumor algorithm
The eigen rumor algorithm
 
CSE509 Lecture 6
CSE509 Lecture 6CSE509 Lecture 6
CSE509 Lecture 6
 
A Guide to Social Network Analysis
A Guide to Social Network AnalysisA Guide to Social Network Analysis
A Guide to Social Network Analysis
 
thesis
thesisthesis
thesis
 
Mining and analyzing social media part 2 - hicss47 tutorial - dave king
Mining and analyzing social media   part 2 - hicss47 tutorial - dave kingMining and analyzing social media   part 2 - hicss47 tutorial - dave king
Mining and analyzing social media part 2 - hicss47 tutorial - dave king
 
Big Data: Social Network Analysis
Big Data: Social Network AnalysisBig Data: Social Network Analysis
Big Data: Social Network Analysis
 
A Novel Frame Work System Used In Mobile with Cloud Based Environment
A Novel Frame Work System Used In Mobile with Cloud Based EnvironmentA Novel Frame Work System Used In Mobile with Cloud Based Environment
A Novel Frame Work System Used In Mobile with Cloud Based Environment
 
SSSW 2016 Cognition Tutorial
SSSW 2016 Cognition TutorialSSSW 2016 Cognition Tutorial
SSSW 2016 Cognition Tutorial
 
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
 
Social network analysis course 2010 - 2011
Social network analysis course 2010 - 2011Social network analysis course 2010 - 2011
Social network analysis course 2010 - 2011
 
Overview Of Network Analysis Platforms
Overview Of Network Analysis PlatformsOverview Of Network Analysis Platforms
Overview Of Network Analysis Platforms
 
Roles In Networks
Roles In NetworksRoles In Networks
Roles In Networks
 
How to conduct a social network analysis: A tool for empowering teams and wor...
How to conduct a social network analysis: A tool for empowering teams and wor...How to conduct a social network analysis: A tool for empowering teams and wor...
How to conduct a social network analysis: A tool for empowering teams and wor...
 
Social Network Analysis
Social Network AnalysisSocial Network Analysis
Social Network Analysis
 
Social Network Analysis (SNA) 2018
Social Network Analysis  (SNA) 2018Social Network Analysis  (SNA) 2018
Social Network Analysis (SNA) 2018
 

Destaque

Destaque (6)

Avoid Social-Media Burnout
Avoid Social-Media BurnoutAvoid Social-Media Burnout
Avoid Social-Media Burnout
 
Brandon ali herrera lopez
Brandon ali herrera lopezBrandon ali herrera lopez
Brandon ali herrera lopez
 
Reggae survey
Reggae surveyReggae survey
Reggae survey
 
Dont vote evolve
Dont vote evolveDont vote evolve
Dont vote evolve
 
Inuksuk chanelle
Inuksuk chanelleInuksuk chanelle
Inuksuk chanelle
 
Photographic elements
Photographic elementsPhotographic elements
Photographic elements
 

Semelhante a System-generated requests for rewriting proposals

A Literature Survey on Recommendation Systems for Scientific Articles.pdf
A Literature Survey on Recommendation Systems for Scientific Articles.pdfA Literature Survey on Recommendation Systems for Scientific Articles.pdf
A Literature Survey on Recommendation Systems for Scientific Articles.pdfAmber Ford
 
Expectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application DomainExpectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application DomainIJERA Editor
 
Expectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application DomainExpectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application DomainIJERA Editor
 
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...IJERA Editor
 
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...IJERA Editor
 
Vu M Kloos 20071116
Vu M Kloos 20071116Vu M Kloos 20071116
Vu M Kloos 20071116Martin Kloos
 
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...csandit
 
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...cscpconf
 
Global Redirective Practices
Global Redirective PracticesGlobal Redirective Practices
Global Redirective PracticesKshitiz Anand
 
Ieml social recommendersystems
Ieml social recommendersystemsIeml social recommendersystems
Ieml social recommendersystemsAntonio Medina
 
Finding The Voice of A Virtual Community of Practice
Finding The Voice of A Virtual Community of PracticeFinding The Voice of A Virtual Community of Practice
Finding The Voice of A Virtual Community of PracticeConnie White
 
Current trends of opinion mining and sentiment analysis in social networks
Current trends of opinion mining and sentiment analysis in social networksCurrent trends of opinion mining and sentiment analysis in social networks
Current trends of opinion mining and sentiment analysis in social networkseSAT Publishing House
 
Global Redirective Practices
Global Redirective PracticesGlobal Redirective Practices
Global Redirective Practicesadjwilli
 
Activating Online Collaborative Communities
Activating Online Collaborative CommunitiesActivating Online Collaborative Communities
Activating Online Collaborative CommunitiesCommunitySense
 
Social machines: theory design and incentives
Social machines: theory design and incentivesSocial machines: theory design and incentives
Social machines: theory design and incentivesElena Simperl
 
Tutorial Cognition - Irene
Tutorial Cognition - IreneTutorial Cognition - Irene
Tutorial Cognition - IreneSSSW
 
Activating Research Collaboratories with Collaboration Patterns
Activating Research Collaboratories with Collaboration PatternsActivating Research Collaboratories with Collaboration Patterns
Activating Research Collaboratories with Collaboration PatternsCommunitySense
 
Social Computing as Social Computation
Social Computing as Social ComputationSocial Computing as Social Computation
Social Computing as Social ComputationThomas Erickson
 
Open source economy v.1.1
Open source economy v.1.1Open source economy v.1.1
Open source economy v.1.1Tabea Hirzel
 
Global Redirective Practices: an online workshop for a client
Global Redirective Practices: an online workshop for a clientGlobal Redirective Practices: an online workshop for a client
Global Redirective Practices: an online workshop for a clientSean Connolly
 

Semelhante a System-generated requests for rewriting proposals (20)

A Literature Survey on Recommendation Systems for Scientific Articles.pdf
A Literature Survey on Recommendation Systems for Scientific Articles.pdfA Literature Survey on Recommendation Systems for Scientific Articles.pdf
A Literature Survey on Recommendation Systems for Scientific Articles.pdf
 
Expectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application DomainExpectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application Domain
 
Expectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application DomainExpectations for Electronic Debate Platforms as a Function of Application Domain
Expectations for Electronic Debate Platforms as a Function of Application Domain
 
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
 
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
Evaluation the Impact of Human Interaction/Debate on Online News to Improve U...
 
Vu M Kloos 20071116
Vu M Kloos 20071116Vu M Kloos 20071116
Vu M Kloos 20071116
 
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
 
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
AN EXTENDED HYBRID RECOMMENDER SYSTEM BASED ON ASSOCIATION RULES MINING IN DI...
 
Global Redirective Practices
Global Redirective PracticesGlobal Redirective Practices
Global Redirective Practices
 
Ieml social recommendersystems
Ieml social recommendersystemsIeml social recommendersystems
Ieml social recommendersystems
 
Finding The Voice of A Virtual Community of Practice
Finding The Voice of A Virtual Community of PracticeFinding The Voice of A Virtual Community of Practice
Finding The Voice of A Virtual Community of Practice
 
Current trends of opinion mining and sentiment analysis in social networks
Current trends of opinion mining and sentiment analysis in social networksCurrent trends of opinion mining and sentiment analysis in social networks
Current trends of opinion mining and sentiment analysis in social networks
 
Global Redirective Practices
Global Redirective PracticesGlobal Redirective Practices
Global Redirective Practices
 
Activating Online Collaborative Communities
Activating Online Collaborative CommunitiesActivating Online Collaborative Communities
Activating Online Collaborative Communities
 
Social machines: theory design and incentives
Social machines: theory design and incentivesSocial machines: theory design and incentives
Social machines: theory design and incentives
 
Tutorial Cognition - Irene
Tutorial Cognition - IreneTutorial Cognition - Irene
Tutorial Cognition - Irene
 
Activating Research Collaboratories with Collaboration Patterns
Activating Research Collaboratories with Collaboration PatternsActivating Research Collaboratories with Collaboration Patterns
Activating Research Collaboratories with Collaboration Patterns
 
Social Computing as Social Computation
Social Computing as Social ComputationSocial Computing as Social Computation
Social Computing as Social Computation
 
Open source economy v.1.1
Open source economy v.1.1Open source economy v.1.1
Open source economy v.1.1
 
Global Redirective Practices: an online workshop for a client
Global Redirective Practices: an online workshop for a clientGlobal Redirective Practices: an online workshop for a client
Global Redirective Practices: an online workshop for a client
 

Último

SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGSujit Pal
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 

Último (20)

SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAG
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 

System-generated requests for rewriting proposals

  • 1. Page 1 SYSTEM-GENERATED REQUESTS FOR REWRITING PROPOSALS Pietro Speroni di Fenizio1, Cyril Velikanov2 We present an online deliberation system using mutual evaluation in order to col- laboratively develop solutions. Participants submit their proposals and evaluate each other’s proposals; some of them may then be invited by the system to rewrite “problematic” proposals. Two cases are discussed: a proposal supported by many, but not by a given person, who is then invited to rewrite it for making yet more acceptable; and a poorly presented but presumably interesting proposal. The first of these cases has been successfully implemented. Proposals are evalu- ated along two axes—understandability (or clarity, or, more generally, quality), and agreement. The latter is used by the system to cluster proposals according to their ideas, while the former is used both to present the best proposals on top of their clusters, and to find poorly written proposals candidates for rewriting. These functionalities may be considered as important components of a large scale online deliberation system. 1. Introduction Our study presented in this paper relates to open online deliberation and open online collabo- ration. Within the context of eParticipation, online deliberation is indeed an essential aspect; while online collaboration should also be considered as essential if we want eParticipation to be purposeful, that is, looking for a solution to a problem stated, and productive, that is, yield- ing result(s) commonly agreed upon. As eParticipation still remains a rather vague concept, we start with better delimiting the kind of activities we are considering in this paper. Namely, we will consider that a given ePartici- pation activity (project, campaign) involves a number of active participants, i.e. participants who may write their own contributions, but who at least are actively reading and appraising (aka evaluating) each other’s contributions. A contribution basically is a proposal (of how to solve the problem being discussed) or a comment on one or more other contributions (propos- als or comments). Other types of contributions may indeed be considered as well, but we are mostly interested in proposals, and then, to a lesser extent, in comments. Hereafter active participants will be simply called participants; while those other people who, even after registering, do not manifested their presence, simply won’t be considered at all. 1.1 Clustering of Contributions One of our basic assumptions is that an open online deliberation on an important societal is- sue, when it is expected to be purposeful and productive, may attract a really large number of 1Pietro Speroni di Fenizio (speroni@dei.uc.pt ), CISUC, Department of Informatics Engineering, University of Coimbra, Pólo II, 3030-290 Coimbra, Portugal 2 Cyril Velikanov (cvelikanov@gmail.com), “Memorial”, Malyi Karetnyi per. 12, Moscow 127051, Russia, and PoliTech Institute, 67 Saint Bernard St., Brussels 1060, Belgium
Within a MOD, a large number of contributions (proposals and comments), all related directly or indirectly to the problem stated in advance, will be “put on the table” by participants, requiring reading, appraisal and further commenting. Indeed, an average participant, who devotes to this activity only part of his/her leisure time, would not be able to read, understand and meaningfully classify such a mass of competing proposals and of various comments on them. Hence the need for a programmed MOD support system to facilitate this task for participants.

However, the task of semantically sorting (clustering) a large number of proposals and comments on a given issue cannot be performed automatically by any existing algorithm of automated text analysis, simply because it must analyse not only the content of every proposal (what it is about), but also its intent (what the author suggests, prefers, wants, or rejects); research into algorithms for the intent analysis of texts is still at its very beginning. On the other hand, in an open mass deliberation, semantic clustering of proposals cannot be commissioned to a staff of “facilitators” external to the deliberating community, because this would often bias the course of deliberation; and yet more often, if not always, it would arouse suspicion that such a bias is indeed present. We refer to [1] for a more detailed rationale.

Hence, the task of clustering proposals in a deliberation can only be performed collectively by the whole deliberating community, and the total number of items to be considered implies that this task should be distributed among a great number of participants, if not all of them, acting in a concerted manner. Such a distributed collective action can only be performed under the guidance of a MOD support system with a suitable clustering algorithm.

1.2 Ranking of Contributions Within a Cluster

The clustering of proposals should result in grouping together proposals that are similar or compatible in the view of those participants who have read (and appraised) them. Some of those clusters may become rather large, thus necessitating a finer analysis. Moreover, a given proposal containing some idea may present that idea in a more or less clear, concise and well-argued way; at the opposite end of this “quality scale” we may find many poorly formulated proposals, down to barely understandable or completely obscure texts. Leaving the latter case apart, such a quality ranking, if performed by participants and aggregated by the support system, can further help participants navigate across a large number of proposals.

As for texts that have appeared obscure to most of their readers, such a text may nevertheless have been understood well enough by some of them, who were therefore able to pass a further judgement on the ideas it contains, in the form of a simple (dis)agreement or of a comment. It would be very desirable to spread this understanding to the rest of the deliberating community, for a badly formulated idea may nevertheless be valuable and original. This may require partial or full rewriting of a text that appears obscure or is misunderstood.
1.3 Collaboration by Rewriting Proposals

Now let us turn to online collaboration, which may be considered either as an integral part of a purposeful online deliberation, or as a separate task. In the former case, it is hardly imaginable that a very large number of participants could productively collaborate, e.g. in preparing a common “final proposal” from a set of “initial” ones. A more practical solution consists in creating, within a large deliberating community, one or more online workgroups of limited size (say, typically up to 30 people) charged with editing a common proposal. In such a case, as well as in that of a standalone collaborative workgroup, there should be an established mechanism that supports the workgroup’s efforts to reach agreement on a commonly accepted final text, by rewriting the input texts, possibly several times. Rewriting may be done collectively, or by individual workgroup members; in the latter case, the support system may help in finding the best candidates for rewriting.

1.4 System-Requested User Actions

In each of the above cases, the support system needs to assist the deliberation and/or collaboration process by requesting specific actions to be performed at specific times by designated participants. These actions may include, in particular: (1) additional appraisal (evaluation) of a given contribution (proposal or comment) that has not yet been seen enough times to reliably assign it to some cluster and to rank it within that cluster; (2) rewriting a poorly written proposal that may nevertheless be of interest to the community; (3) rewriting a controversial proposal in order to make it more acceptable to a larger part of the community or of a workgroup.

The idea of system-requested user actions has several aspects, discussed in the following sections. First, we need an algorithm for selecting the most necessary action(s) to be system-requested at a given time. Second, we need a principle for selecting the participant(s) best positioned to successfully perform the requested action(s). Third, we should consider the motives that would prompt participants to perform the requested action rather than decline it, and whether any incentives should be established to create such motives. We will discuss these aspects for each of the above contexts (1) – (3), starting from the third.

2. Rewriting Proposals by Those Who Disagree With Them

The idea of charging the system with the task of requesting specific participants to rewrite specific proposals with which they disagree has been implemented by one of the authors in the Web-based collaboration tool “Vilfredo goes to Athens” (http://vilfredo.org). The website is open for participation; its description can be found in [2]. In this system, participants can state a problem, then submit their own proposals (on how to solve the problem stated) and evaluate each other’s proposals. The system uses a human-based genetic algorithm (HBGA) [6], where the whole process may run for several cycles, each consisting of a proposal phase and an evaluation phase. In every consecutive cycle, a new generation of proposals is written, based on, or inspired by, a filtered selection of the proposals of the previous generation. The selection is done by extracting the Pareto front [7] of the proposals.

The concept of Pareto front can be explained as follows. Let us say that a proposal A dominates a proposal B if everybody who votes for B also votes for A, while at least one participant votes for A but not for B. We can then exclude the proposal B from the list of proposals without making anyone really unhappy, because everyone who was satisfied with B is still satisfied with one of the remaining proposals, namely A. The Pareto front is the set of proposals that are not dominated, and which hence pass the filter.

This particular way in which proposals reach the next generation may lead to some singular effects. One such effect is that sometimes a certain person has a stronger than average influence on the Pareto front. For example, suppose that there are two proposals A and B, and everybody (except John) who voted for B also voted for A, while there are other people who voted for A but not for B. So A “nearly dominates” B: everybody except John either prefers A to B or finds them equally acceptable. In such a case, easily discoverable by the support system, John can be invited to rewrite A in a way he thinks would be acceptable for himself while (presumably) still acceptable for those who currently accept the original proposal A.

If John’s version of A, say A’, turns out to be acceptable to everybody who voted for A, and if John also accepts it (something we can assume, since he wrote it), then A’ will dominate both A and B, and the Pareto front shrinks by at least this one proposal. In a typical generation, multiple people may be invited to rewrite proposals; moreover, the same proposal can be rewritten by multiple people (each one trying to make the proposal dominate a different one), and the same person can be invited to rewrite multiple proposals. In general, the more a person votes differently from other people (and thus represents a unique point of view), the more often the system will ask him or her to rewrite others’ proposals.

Field experience has shown that participants tend to welcome the invitation to work on a particular proposal. In http://vilfredo.org, at every generation each user has many possibilities: he/she can rewrite another person’s proposal, try to work out a compromise between two different positions, or recover a proposal that was lost by reframing it in a different way. Among all these possibilities it is common for users to feel lost. As time is always limited, a participant will want to take the most effective action. An invitation presented by the support system represents, in this sense, one of the most cost-effective possible actions, and is thus generally well received.
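To make the dominance test and the “near-domination” trigger concrete, here is a minimal Python sketch. It is our illustration, not the vilfredo.org code; the vote format (a set of approving participants per proposal) and all names are hypothetical.

    # Minimal sketch (not the vilfredo.org implementation): Pareto filtering
    # of proposals under approval voting, plus detection of "near-domination"
    # by a single voter, who is then a candidate for a rewriting request.

    def dominates(voters_a: set, voters_b: set) -> bool:
        # A dominates B if everyone who votes for B also votes for A,
        # and at least one participant votes for A but not for B,
        # i.e. B's voter set is a proper subset of A's.
        return voters_b < voters_a

    def pareto_front(votes: dict) -> list:
        # votes maps proposal -> set of participants who accept it.
        return [p for p in votes
                if not any(dominates(votes[q], votes[p])
                           for q in votes if q != p)]

    def near_dominations(votes: dict) -> list:
        # Find pairs (A, B) where A would dominate B but for exactly one
        # dissenting voter; that voter may be invited to rewrite A.
        requests = []
        for a in votes:
            for b in votes:
                if a == b:
                    continue
                dissenters = votes[b] - votes[a]   # vote for B but not A
                extra = votes[a] - votes[b]        # vote for A but not B
                if len(dissenters) == 1 and extra:
                    requests.append((a, b, next(iter(dissenters))))
        return requests

    votes = {
        "A": {"Ann", "Bob", "Carla", "Dan"},
        "B": {"Ann", "Bob", "John"},
    }
    print(pareto_front(votes))      # ['A', 'B']: John keeps B on the front
    print(near_dominations(votes))  # [('A', 'B', 'John')]: invite John to rewrite A

In the example, B stays on the Pareto front only because of John’s vote; the sketch surfaces exactly the situation in which the system would invite John to produce A’.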
3. Clustering, Ranking, and System-Requested Appraisal Actions

Our next step concerns a support system for mass online deliberation (MOD), the need for which was discussed in the introduction. This system will include an algorithm for clustering and ranking participants’ contributions (proposals, comments, and more), based on individual appraisals (evaluations) of every contribution by a limited number of other participants.

To assure a more objective appraisal of every contribution, especially immediately after it is written and uploaded into the system, participants who come online will be invited to evaluate proposals that the system presents to them at random. This should be done in such a way that each proposal is evaluated on average the same number of times, and all are evaluated at least a predefined minimum number of times. Also, at this stage it should not be possible for any participant to see the evaluation results of others, to avoid any unconscious bias. If, at the end of this “blind evaluation” period, some proposals have not yet been evaluated enough, they can be sent for additional evaluation to randomly selected participants.

This may be seen as a simplified blind peer review of contributions that makes use of voluntary participation as much as possible. Note that random selection of proposals for evaluation by a given person is functionally equivalent to random selection of participants (among currently active ones) for evaluation of a given proposal. Each reviewer may thus be considered as performing a system-requested action: appraising a new contribution that he/she would otherwise probably never have any impulse to read.

3.1 Managing Public Responsiveness to System Requests

Experience from existing projects that include user appraisal (evaluation) of texts and other items, and/or their tagging, shows that people are typically active enough in performing such actions on a discretionary (at-will) basis. It remains to be discovered how responsive the people involved in future large-scale eParticipation activities will be to the same type of actions when they are not voluntary but system-requested. We can expect a sufficient level of responsiveness, considering that the possibility to participate and collaborate is a reward in itself.

If, however, the public is not responsive enough, specific incentives could optionally be installed and managed by the support system. For example, the system could maintain two activity counters for every participant: one for his/her voluntary actions, especially writing his/her own contributions, and another for system-requested actions performed. If the second counter is too low, new contributions from that participant would be temporarily blocked, pending a sufficient number of system-requested actions performed.

A system that randomly requires participants to do this or that at a moment that may be inopportune for a given person may be perceived as too “oppressive”. To avoid such an undesirable effect, the system may send its requests at any given moment only to those participants who have themselves manifested their readiness to perform system-requested actions at that moment. In this way, the system randomly selects a contribution to be sent to a given participant, rather than a random participant to read a given contribution. Globally, the result is the same. A minimal sketch of such an assignment policy is given below.
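The following Python sketch is our illustration of one such assignment policy; the data structures and the minimum-evaluation threshold are assumptions, not part of the described system. It picks a proposal for a participant who has declared readiness: it skips the participant’s own and already-seen contributions, prefers proposals still below the minimum number of appraisals, and breaks ties at random so counts stay balanced on average.

    import random

    # Sketch of balanced "blind evaluation" assignment (illustrative only).
    # proposals:   proposal_id -> author
    # eval_counts: proposal_id -> number of appraisals so far
    # seen:        participant -> set of proposal_ids already appraised

    def pick_proposal_for(participant, proposals, eval_counts, seen,
                          min_evals=5):
        # Candidates: not the participant's own work, not yet seen by them.
        candidates = [p for p, author in proposals.items()
                      if author != participant and p not in seen[participant]]
        if not candidates:
            return None
        # Prefer proposals still below the minimum; among the chosen pool,
        # take one with the fewest appraisals, ties broken at random.
        below_min = [p for p in candidates if eval_counts[p] < min_evals]
        pool = below_min or candidates
        fewest = min(eval_counts[p] for p in pool)
        return random.choice([p for p in pool if eval_counts[p] == fewest])

    proposals = {"P1": "Ann", "P2": "Bob", "P3": "Carla"}
    eval_counts = {"P1": 2, "P2": 0, "P3": 5}
    seen = {"Dan": {"P1"}}
    print(pick_proposal_for("Dan", proposals, eval_counts, seen))  # "P2"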
3.2 Two-Parameter Appraisal: How Well Presented, How Much Do I Agree

Our algorithm for clustering and ranking a large set of participants’ contributions is based on their double appraisal: first by a few randomly selected participants (peer reviewers), then by whoever wishes to read and appraise any given contribution. The latter (discretionary appraisers) would presumably be helped by the support system when looking for the contributions that are potentially the most interesting for them, and this system-made selection will be made using the said algorithm, as briefly explained below; see [3] for more details.

So, every participant is expected, when reading a contribution, to appraise it on two different scales: one for the contribution’s quality, another for the reader’s agreement with the ideas it contains. The quality of a given text is considered to be its intrinsic characteristic, thus calling for an objective evaluation by the reader, while the agreement is a fully subjective one, as it depends on the reader’s convictions, beliefs, etc. Hence the two measures can theoretically be seen as independent (though, as we will see in the next section, they do depend on each other).

The two parameters are then exploited by the system in quite different ways. The quality grades assigned by individual appraisers are aggregated into the quality rank of every contribution, while the individually assigned agreement levels are used by the system to find contributions expressing presumably similar or compatible ideas or opinions. Mathematically, a metric is defined on the space of all contributions by pairwise comparison of the agreement levels assigned by the same readers: if X agrees with both A and B, this decreases the distance between A and B; if Y agrees with A but disagrees with B, this increases the distance. Based on this metric, the system can discover clusters of presumably similar or compatible contributions.

Quality ranking is then performed within every cluster. Namely, good quality grades assigned by appraisers to a contribution will promote it towards the top of the ranked list of contributions in its cluster. This is much more meaningful than the ordinary ranking of all contributions in a single list; in particular, our method gives voice to minorities, whose opinions are not supported by many.

When a participant wants to read the “most interesting” (or “most representative”, etc.) contributions, the system will suggest to him/her a few highly ranked contributions within every cluster, including the smallest clusters. In this way, the participant can easily get acquainted with the whole spectrum of the currently expressed ideas.

3.3 System Requests for Additional Appraisals

At some point our algorithm may discover that there is not enough data to decide to which cluster a given proposal should go (or whether it should start a new cluster), or to better discern the difference between two proposals or two clusters. In such cases the system may request additional appraisals from participants.

At this stage, however, not all participants are equal in their capacity to make an appraisal that will be decisive for the system. For example, if participant U has not yet read and appraised either A or B, he should do it for both in order for the system to increase its knowledge about the (dis)similarity between A and B. In contrast, if V has appraised A but not B, or vice versa, then the system can ask her to appraise the other one, thus receiving the same valuable information from just one requested action, rather than the two needed in the case of U.
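The paper fixes no formula for the metric of Section 3.2, so the following Python sketch is our assumption: it averages the gap between the agreement levels of the readers two contributions share, and returns no distance at all when they share no readers, which is exactly the data gap of Section 3.3 that would trigger a request for additional appraisal.

    # Sketch of an agreement-based distance between two contributions
    # (illustrative; the exact formula is our assumption).
    # appraisals: contribution_id -> {participant: agreement in [-1.0, +1.0]}

    def distance(a_id, b_id, appraisals):
        a, b = appraisals[a_id], appraisals[b_id]
        common = set(a) & set(b)      # readers who appraised both
        if not common:
            return None               # unknown distance: the system may
                                      # request an additional appraisal
        # Readers who rate both contributions alike keep the distance
        # small; readers who rate them oppositely increase it.
        gap = sum(abs(a[r] - b[r]) for r in common) / (2 * len(common))
        return gap                    # 0 = identical profiles, 1 = opposite

    appraisals = {
        "A": {"X": +1.0, "Y": +1.0, "Z": -0.5},
        "B": {"X": +1.0, "Y": -1.0},
    }
    print(distance("A", "B", appraisals))  # (0 + 2.0) / 4 = 0.5

Under this assumption, a participant like V of Section 3.3, who has appraised A but not B, is precisely the cheapest source of new information: one more appraisal from her adds a reader to the shared set of A and B.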
4. Rewriting Poorly Written but Agreeable Proposals

Let us now discuss in more detail a very simple case of two-parameter appraisal, where each participant is invited to characterise a proposal with just one of three discrete values: “agree”, “disagree”, or “don’t understand”. Despite the simplicity and straightforwardness of this method, it is still a “two-parameter” one, and it can provide the system with fairly rich information on both the set of proposals on the table and the community of participants.

4.1 Agreement Depends Upon Understanding

Although the quality with which a proposal is written and how much a person agrees with it are seen as independent, they are not fully so, for only inasmuch as we understand a proposal can we agree or disagree with it. We cannot agree or disagree with something we do not understand. These two measures are a little like light and colour: if there is no light we cannot perceive any colour, nor does it make sense to speak about the spectrum of a light source when no light source is there; and the brighter the light, the better we can distinguish different colours. Here we consider only two “colours” (agree, disagree), and the brightness represents how clearly we understand the proposal (see Figure 1; in the printed version, however, colours are not represented).

Fig. 1. Space of possible evaluations for a single proposal. Participants will be invited to evaluate every proposal with respect to how clearly they understand it and how much they agree with it. The more a participant understands a proposal, the more he/she can agree or disagree with it. When a proposal is completely incomprehensible, it cannot be rated on the agreement axis.

So the simplest form of evaluation that can still work in our system would have three discrete options: “agree”, “disagree”, and “don’t understand”. More continuous evaluations are also possible, always keeping in mind that if a person does not understand a proposal at all, he/she should not be allowed to evaluate how much he/she agrees with it. Also, in a continuous (or multi-grade, say with integers from -5 to +5) evaluation, the span of possible agreement grades should depend on the level of understanding, so that for a medium level of understanding the outermost agreement values, e.g. -5 or +5, are not allowed. This quite natural restriction is graphically represented by the inverted triangle in Figure 1.
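The inverted-triangle restriction is easy to enforce in an evaluation form. A minimal Python sketch, assuming a 0..5 understanding scale and the -5..+5 agreement scale mentioned above (the scales are illustrative):

    # Sketch of the "inverted triangle" constraint of Figure 1 (illustrative):
    # the allowed span of agreement grows with the level of understanding.

    def is_valid_evaluation(understanding: int, agreement: int) -> bool:
        # understanding in 0..5, agreement in -5..+5.
        if not (0 <= understanding <= 5 and -5 <= agreement <= 5):
            return False
        # With zero understanding ("don't understand") no agreement rating
        # other than neutral is possible; full understanding permits the
        # whole -5..+5 range.
        return abs(agreement) <= understanding

    print(is_valid_evaluation(0, 0))   # True  (incomprehensible, no opinion)
    print(is_valid_evaluation(3, 5))   # False (medium understanding, extreme grade)
    print(is_valid_evaluation(5, -5))  # True  (full understanding, strong dissent)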
4.2 Using the Above Two Measures

Once these measures have been taken and users start to evaluate the existing proposals, the system recovers a lot of useful information, much more than what can be deduced from an ordinary “linear” evaluation, even a multi-grade one.

The most obvious information that can be gathered is, first, to what extent a proposal is understandable, and second, to what extent people agree with it. But less obvious data can also be gathered, for example: which users are able to write widely understandable proposals; which users understand and agree with a specific proposal; and which proposals share the same user base (in other words, which proposals are agreed upon by the same users). Each of these characteristics can then be used in a different way.

4.3 Using Agreement Data for Clustering

Here we present the simplest form of our agreement-based clustering algorithm, for the above three-value appraisal scheme. Suppose we have n proposals and m users. Let Kn be the complete weighted undirected graph with n nodes and n(n-1)/2 edges, and let EAB be the edge between nodes A and B. We assign to EAB a weight that reflects how many people who voted for A also voted for B, and vice versa. Identifying each proposal with the set of users who voted for it, the weight can be defined as W = |A∩B| / |A∪B|, i.e. the cardinality of the intersection of A and B divided by the cardinality of their union (the Jaccard similarity of the two voter sets). W is a value between 0 and 1, with W = 0 if no one voted for both A and B, and W = 1 if everybody who voted for A also voted for B and vice versa.

There are many straightforward ways to cluster the nodes of such a weighted graph; the simplest is to delete every edge whose weight is less than some threshold x (see [5]). Clustering the nodes of a graph is a well-researched area, so we can simply use one of the existing methods to perform proposal clustering in a mass online deliberation (see e.g. [4]). A minimal sketch of the threshold variant follows.
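This Python sketch implements the threshold variant just described (the threshold value and data layout are our assumptions): it computes the Jaccard weight for every pair of proposals, drops edges with W < x, and takes the connected components of what remains as clusters.

    from itertools import combinations

    # Sketch of the simplest agreement-based clustering (illustrative):
    # Jaccard weights on the complete graph Kn, threshold filtering,
    # and connected components as clusters.

    def jaccard(a: set, b: set) -> float:
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    def cluster_proposals(voters: dict, x: float = 0.5) -> list:
        # voters: proposal_id -> set of users who voted "agree".
        neighbours = {p: set() for p in voters}
        for p, q in combinations(voters, 2):
            if jaccard(voters[p], voters[q]) >= x:   # keep edges with W >= x
                neighbours[p].add(q)
                neighbours[q].add(p)
        # Connected components of the thresholded graph are the clusters.
        clusters, unvisited = [], set(voters)
        while unvisited:
            stack = [unvisited.pop()]
            component = set(stack)
            while stack:
                for nxt in neighbours[stack.pop()] & unvisited:
                    unvisited.discard(nxt)
                    component.add(nxt)
                    stack.append(nxt)
            clusters.append(component)
        return clusters

    voters = {
        "A": {"u1", "u2", "u3"},
        "B": {"u1", "u2"},
        "C": {"u7", "u8"},
    }
    print(cluster_proposals(voters, x=0.5))  # [{'A', 'B'}, {'C'}] in some order

Here A and B share two of three voters (W = 2/3), so they fall into one cluster, while C, with a disjoint voter set, forms its own.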
4.4 Using Quality Evaluation

While agreement evaluation can and should be used to cluster proposals, quality evaluation (in our case clarity, i.e. how understandable a proposal is) can be used for a different purpose. Namely, by counting the proportion of participants who did not understand any given proposal, the system can discover which proposals are not clearly written, and then invite somebody among the participants to rewrite those overly obscure proposals.

We do not think it practical to suggest that an author rewrite his/her own proposal, for we assume that each author has already done his/her best. But we also do not want simply to ask a random user to rewrite a proposal. Instead, the person chosen to rewrite a proposal that needs clarification should (a) understand the proposal, (b) be able to write (proposals) well, possibly better than the original author, and (c) agree with the proposal. Although point (c) is not strictly necessary, a person who does not agree with a proposal might not wish to spend time and effort trying to rewrite it, and might be biased against it.

On the other side, a proposal that needs to be rewritten should (a) be widely considered incomprehensible, (b) be such that those people who do understand it generally support it (otherwise there is no real need to clarify it), and (c) possibly be part of a relatively small cluster, so that the same ideas are not already present in some similar proposal(s). If all that is true, then the proposal probably contains interesting but not clearly expressed ideas, ideas that would be lost if they were not explained better. Thanks to the above-described appraisal method, our support system has all this information. Hence, for each proposal that needs to be rewritten, the system can find who should rewrite it, and ask that person to do so.

Once the new proposal (or the new version of the old proposal) has been written, the initial author can be contacted and asked whether he/she agrees that the new proposal still expresses his/her ideas. If he/she agrees, the new proposal can be advertised to all those people who supported the original proposal, as well as to those who had declared that they did not understand it. The new proposal can be presented as “written by 1st author and 2nd author on an original idea of 1st author”. The process can have several iterations and more co-authors, though probably few ideas are so complex as to require more than one rewriting.

References

[1] C. Velikanov. Procedures and Methods for Cross-community Online Deliberation, in: eJournal of eDemocracy & Open Government (JeDem), Vol. 2, No. 2, Danube University, Krems, 2010.
[2] P. Speroni di Fenizio, D. Paterson. Don't Vote, Evolve!, in: Proceedings of the Second International Conference on eParticipation (ePart 2010), LNCS, Lausanne, Switzerland, 2010.
[3] C. Velikanov. Mutual Moderation and Appraisal of Contributions in eParticipation, in: Proceedings of the eDem 2010 Conference, Austrian Computer Society, Vienna, 2010, pp. 173-182.
[4] S.E. Schaeffer. Graph Clustering, in: Computer Science Review, Vol. 1, Issue 1, 2007, pp. 27-64.
[5] T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein. Introduction to Algorithms (3rd ed.), MIT Press, 2009.
[6] A. Kosorukoff. Human Based Genetic Algorithm, in: Proceedings of the 2001 IEEE International Conference on Systems, Man, and Cybernetics.
[7] S.G. Ficici, J.B. Pollack. Pareto Optimality in Coevolutionary Learning, in: Advances in Artificial Life: Proceedings of the 6th European Conference (ECAL 2001), Prague, 2001.