Overview
Functions that can be used to compare and contrast voting methods.
Profiles with Different Winners
- pref_voting.analysis.find_profiles_with_different_winners(vms, numbers_of_candidates=[3, 4, 5], numbers_of_voters=[5, 25, 50, 100], all_unique_winners=False, show_profiles=True, show_margin_graphs=True, show_winning_sets=True, show_rankings_counts=False, return_multiple_profiles=True, probmod='IC', num_trials=10000)[source]
Given a list of voting methods, search for profiles with different winning sets.
- Parameters:
vms (list(functions)) – A list of voting methods.
numbers_of_candidates (list(int), default = [3, 4, 5]) – The numbers of candidates to check.
numbers_of_voters (list(int), default = [5, 25, 50, 100]) – The numbers of voters to check.
all_unique_winners (bool, default = False) – If True, only return profiles in which each voting method has a unique winner.
show_profiles (bool, default=True) – If True, show profiles with different winning sets for the voting methods when discovered.
show_margin_graphs (bool, default=True) – If True, show margin graphs of the profiles with different winning sets for the voting methods when discovered.
show_winning_sets (bool, default=True) – If True, show the different winning sets for the voting methods when discovered.
show_rankings_counts (bool, default=False) – If True, show the rankings and counts of the profiles with different winning sets for the voting methods.
return_multiple_profiles (bool, default=True) – If True, return all profiles that are found.
probmod (str, default="IC") – The probability model to be passed to the generate_profile method.
num_trials (int, default=10000) – The number of profiles to check for different winning sets.
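A minimal usage sketch (not part of the documented API above). It assumes that plurality, borda, and instant_runoff are importable from pref_voting.voting_methods; the exact names may vary between versions.

```python
from pref_voting.analysis import find_profiles_with_different_winners
from pref_voting.voting_methods import plurality, borda, instant_runoff  # assumed imports

# Search small elections for profiles where the three methods pick different winners.
profiles = find_profiles_with_different_winners(
    [plurality, borda, instant_runoff],
    numbers_of_candidates=[3, 4],
    numbers_of_voters=[11, 101],
    show_profiles=False,       # suppress printing; just collect the profiles
    show_margin_graphs=False,
    num_trials=1000,
)
```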
Condorcet Efficiency
- pref_voting.analysis.condorcet_efficiency_data(vms, numbers_of_candidates=[3, 4, 5], numbers_of_voters=[4, 10, 20, 50, 100, 500, 1000], probmods=['IC'], probmod_params=None, num_trials=10000, use_parallel=True, num_cpus=12)[source]
Returns a Pandas DataFrame with the Condorcet efficiency of a list of voting methods.
- Parameters:
vms (list(functions)) – A list of voting methods.
numbers_of_candidates (list(int), default = [3, 4, 5]) – The numbers of candidates to check.
numbers_of_voters (list(int), default = [4, 10, 20, 50, 100, 500, 1000]) – The numbers of voters to check.
probmods (list(str), default=['IC']) – The probability models to be passed to the generate_profile method.
probmod_params (dict, default=None) – Optional parameters for the probability models, passed to the generate_profile method.
num_trials (int, default=10000) – The number of profiles to sample for each combination of candidates and voters.
use_parallel (bool, default=True) – If True, then use parallel processing.
num_cpus (int, default=12) – The number of (virtual) cpus to use if using parallel processing.
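A hedged sketch of collecting Condorcet efficiency data with the signature above. The column layout of the returned DataFrame is not documented here, so the sketch only prints it; the imported voting methods are assumed to exist in pref_voting.voting_methods.

```python
from pref_voting.analysis import condorcet_efficiency_data
from pref_voting.voting_methods import plurality, borda  # assumed imports

df = condorcet_efficiency_data(
    [plurality, borda],
    numbers_of_candidates=[3, 4],
    numbers_of_voters=[10, 100],
    probmods=["IC"],
    num_trials=1000,
    use_parallel=False,   # avoid spawning worker processes for a small run
)
print(df.head())
```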
Axiom Violations
- pref_voting.analysis.axiom_violations_data(axioms, vms, numbers_of_candidates=[3, 4, 5], numbers_of_voters=[4, 5, 10, 11, 20, 21, 50, 51, 100, 101, 500, 501, 1000, 1001], probmods=['IC'], num_trials=10000, verbose=False, use_parallel=True, num_cpus=12)[source]
Returns a Pandas DataFrame with data on axiom violations for a list of voting methods.
- Parameters:
axioms (list) – A list of axioms to check for violations.
vms (list(functions)) – A list of voting methods.
numbers_of_candidates (list(int), default = [3, 4, 5]) – The numbers of candidates to check.
numbers_of_voters (list(int), default = [4, 5, 10, 11, 20, 21, 50, 51, 100, 101, 500, 501, 1000, 1001]) – The numbers of voters to check.
probmods (list(str), default=['IC']) – The probability models to be passed to the generate_profile method.
num_trials (int, default=10000) – The number of profiles to sample for each combination of candidates and voters.
use_parallel (bool, default=True) – If True, then use parallel processing.
num_cpus (int, default=12) – The number of (virtual) cpus to use if using parallel processing.
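An illustrative sketch of calling axiom_violations_data. The axiom objects imported below (condorcet_winner, condorcet_loser) are assumptions about what pref_voting.axioms exports and may differ by version; any axioms accepted by the function can be substituted.

```python
from pref_voting.analysis import axiom_violations_data
from pref_voting.axioms import condorcet_winner, condorcet_loser  # assumed imports
from pref_voting.voting_methods import plurality, borda           # assumed imports

df = axiom_violations_data(
    [condorcet_winner, condorcet_loser],
    [plurality, borda],
    numbers_of_candidates=[3],
    numbers_of_voters=[5, 11, 21],
    num_trials=1000,
    use_parallel=False,
)
print(df.head())
```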
Binomial Confidence Interval
- pref_voting.analysis.binomial_confidence_interval
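Since the docstring for binomial_confidence_interval is not rendered above, here is a generic sketch of a normal-approximation (Wald) confidence interval for a binomial proportion, which is the kind of quantity such a function reports (e.g., the fraction of sampled profiles on which a method elects the Condorcet winner). This is not pref_voting's implementation.

```python
import numpy as np

def wald_binomial_ci(successes, trials, z=1.96):
    # Generic sketch, not pref_voting's implementation: 95% normal-approximation
    # (Wald) confidence interval for a binomial proportion.
    p_hat = successes / trials
    half_width = z * np.sqrt(p_hat * (1 - p_hat) / trials)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# e.g., a method elected the Condorcet winner in 812 of 1000 sampled profiles
print(wald_binomial_ci(812, 1000))
```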
Means with Estimated Standard Error
- pref_voting.analysis.means_with_estimated_standard_error(generate_samples, max_std_error, initial_trials=1000, step_trials=1000, min_num_trials=10000, max_num_trials=None, verbose=False)[source]
For each list of numbers produced by generate_samples, returns the means, the estimated standard error (https://en.wikipedia.org/wiki/Standard_error) of the means, the variance of the samples, and the total number of trials.
Uses the estimated_variance_of_sampling_dist (as described in https://berkeley-stat243.github.io/stat243-fall-2023/units/unit9-sim.html) and estimated_std_error functions.
- Parameters:
generate_samples (function) – A function that returns a 2d numpy array of samples. It should take two arguments: num_samples and step (step is only used if samples are drawn from a pre-computed source, in order to ensure that new samples are used in each iteration of the estimation loop).
max_std_error (float) – The desired estimated standard error for the mean of each sample.
initial_trials (int, default=1000) – The number of samples to initially generate.
step_trials (int, default=1000) – The number of samples to generate in each step.
min_num_trials (int, default=10000) – The minimum number of trials to run.
max_num_trials (int, default=None) – If not None, then the maximum number of trials to run.
verbose (bool, default=False) – If True, then print progress information.
- Returns:
A tuple (means, est_std_errors, variances, num_trials) where means is an array of the means of the samples, est_std_errors is an array of estimated standard errors of the samples, variances is an array of the variances of the samples, and num_trials is the total number of trials.
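A hedged usage sketch based on the parameter descriptions above: generate_samples must accept num_samples and step and return a 2d numpy array, one row per quantity whose mean is being estimated. The two simulated quantities below are purely illustrative.

```python
import numpy as np
from pref_voting.analysis import means_with_estimated_standard_error

rng = np.random.default_rng(0)

def generate_samples(num_samples, step):
    # Two rows of samples (e.g., two simulated statistics of interest).
    # step is ignored because fresh samples are drawn on every call.
    return rng.normal(loc=[[2.0], [1.5]], scale=1.0, size=(2, num_samples))

means, est_std_errors, variances, num_trials = means_with_estimated_standard_error(
    generate_samples,
    max_std_error=0.01,   # keep sampling until each estimated standard error is below this
    verbose=True,
)
print(means, est_std_errors, variances, num_trials)
```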